Originally posted by Anthony342 View Post
A time machine that could actually deliver and tell us who would win mythical match ups... Is it more of a reality than we think?!
-
Originally posted by billeau2 View Post
They managed well on Star Trek with the technologies, and even for the aliens, for the most part... never going full Irwin Allen on the projects. Roddenberry probably had a big say in this. BUT occasionally? I can think of some episodes of Star Trek (my favorite show as a kid) including one alien in a hideous Halloween costume, and another with antennas that looked like pipe cleaners and styrofoam. Those were the days, no CGI to speak of.
Comment
-
Originally posted by billeau2 View Post
Probably so... I do remember that the series lasted an incredibly long time... The robots, both Robby and the evil robot, are in the Seattle museum for science fiction. They have a full-scale model of the Alien as well... scary looking!!
Comment
-
Originally posted by Bundana View Post
I would be interested in your opinion on how far down the road you think it will be before a self-learning computer will be aware of its own situation - you know, realizing that it has been created by humans (like we see in so many Sci-Fi movies). And how would it feel about that? Would it develop something that resembles human "emotions"? Could we imagine that one day a computer would be "moved" by a sad film, or something exceptionally beautiful?
How would such a computer react, if we show it a clip like this?:
Would it be completely indifferent, or would it think: "Damn, why do they keep torturing me with these gorgeous women (we're assuming it's a male computer - but I guess it also works with a female, lesbian one!)... when they know only too well that I can't do anything about it?"
“The avatar smiled silkily as it leaned closer to him, as though imparting a confidence. "Never forget I am not this silver body, Mahrai. I am not an animal brain, I am not even some attempt to produce an AI through software running on a computer. I am a Culture Mind. We are close to gods, and on the far side.
"We are quicker; we live faster and more completely than you do, with so many more senses, such a greater store of memories and at such a fine level of detail. We die more slowly, and we die more completely, too. Never forget I have had the chance to compare and contrast the ways of dying.
[...]
"I have watched people die in exhaustive and penetrative detail," the avatar continued. "I have felt for them. Did you know that true subjective time is measured in the minimum duration of demonstrably separate thoughts? Per second, a human—or a Chelgrian—might have twenty or thirty, even in the heightened state of extreme distress associated with the process of dying in pain." The avatar's eyes seemed to shine. It came forward, close to his face by the breadth of a hand.
"Whereas I," it whispered, "have billions." It smiled, and something in its expression made Ziller clench his teeth. "I watched those poor wretches die in the slowest of slow motion and I knew even as I watched that it was I who'd killed them, who at that moment engaged in the process of killing them. For a thing like me to kill one of them or one of you is a very, very easy thing to do, and, as I discovered, absolutely disgusting. Just as I need never wonder what it is like to die, so I need never wonder what it is like to kill, Ziller, because I have done it, and it is a wasteful, graceless, worthless and hateful thing to have to do.
"And, as you might imagine, I consider that I have an obligation to discharge. I fully intend to spend the rest of my existence here as Masaq' Hub for as long as I'm needed or until I'm no longer welcome, forever keeping an eye to windward for approaching storms and just generally protecting this quaint circle of fragile little bodies and the vulnerable little brains they house from whatever harm a big dumb mechanical universe or any conscious malevolent force might happen or wish to visit upon them, specifically because I know how appallingly easy they are to destroy. I will give my life to save theirs, if it should ever come to that. And give it gladly, happily, too, knowing that trade was entirely worth the debt I incurred eight hundred years ago, back in Arm One-Six.”
It's impossible to know when or if such an emergent intelligence might arise, but of all the speculative fiction that deals with the topic, Iain Banks brings us viscerally closer than anyone since **** to imagining how one might think. What I would say is that research into animal consciousness is continually forcing us to reevaluate just what we mean by the term 'self awareness', and giving us good reason to think that it might be something far less unique than we have historically imagined.
Another thing worthy of consideration is that one key area in machine learning is the idea of want or desire... OK, so you set a machine loose on the world with the equipment to analyse data at incomparable rates, cross-reference, whatever... but why should this wonder machine do anything but sit on its proverbial arse? What makes us - humans, animals, whatever - get off our arses and do stuff? OK, so we actually have biological needs to meet - imperatives: water, food, shelter, company/sex - a hierarchical order, and we are driven to do such things by the demands our physical bodies make on us. But over and above that we are pre-programmed, if you like, with curiosity - the urge to explore and investigate, to try **** out and see what works and see if it can benefit us. Now current thinking in AI goes very much along the lines that to create a genuine AI you will need to essentially give it both these things... program it with (for want of better terms) 'desires' or 'needs', and the ability and desire to investigate everything within the range of its senses (presumably data for a purely software-based AI), because without desire, without purpose, why should it do anything at all?
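Just to make that concrete: in reinforcement learning terms, the 'desire' is a reward signal and the 'curiosity' is an intrinsic exploration bonus added on top of it. Here's a rough Python sketch of the shape of the idea - the state names and reward numbers are entirely invented for illustration:

```python
from collections import defaultdict

# How often the agent has seen each state.
visit_counts = defaultdict(int)

def curiosity_bonus(state):
    """Count-based novelty bonus: rarely seen states pay more."""
    visit_counts[state] += 1
    return 1.0 / visit_counts[state]

def total_reward(state, extrinsic_reward):
    """The agent's 'motivation': programmed need plus the urge to explore."""
    return extrinsic_reward + curiosity_bonus(state)

# Visiting a state for the first time is worth more than revisiting it,
# so even with zero extrinsic reward the agent has a reason to move.
first = total_reward("room_a", 0.0)   # novelty bonus of 1.0
second = total_reward("room_a", 0.0)  # bonus halves on the second visit
print(first, second)
```

Without that second term the machine has no reason to get off its arse; with it, unexplored data is literally worth something to it.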
Just as a final thought, could personality - human personality - not in some ways also be described as a set of desires or aversions weighted to different degrees, perhaps visualised as a mixing desk or some such, with sliders set to different levels of curiosity and openness, caution or confidence in approaching the sensory data from which we construct our subjective worlds?
Comment
-
Originally posted by Citizen Koba View Post
― Iain M. Banks,
Bob Lazar, IMO one of the most credible whistleblowers, described an alien propulsion system as a piece of machinery with no moving parts: a generator using gravity field fluctuations to attain amazing feats of flight. This system is foreign to us, among other reasons, because our consciousness still equates the laws of propulsion and conservation of energy with a mechanical process. Also because the technology still escapes us.
Which is to say, we make assumptions about artificial intelligence as well. For example, we assume it would have some inkling of superiority. We assume it is reliable. We assume it lacks empathy... BUT what IS empathy? It is simply an abstract of self that can posit cause and effect on self based on observing another. This should be attainable for a machine; it is reasonable.
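That definition of empathy - reusing your model of yourself to posit cause and effect on someone else - is mechanical enough to sketch in code. This is a toy illustration only; the states, actions, and effect values are all made up:

```python
def self_model(state, action):
    """The agent's model of what an action does to *itself*.
    State is a crude wellbeing score; effects are invented numbers."""
    effects = {"insult": -1, "help": +1}
    return state + effects.get(action, 0)

def empathize(observed_other_state, action):
    """'Empathy' here: substitute the other's observed state into the
    agent's own cause-and-effect model of itself."""
    return self_model(observed_other_state, action)

# The machine predicts that insulting someone lowers their wellbeing,
# without ever having been that someone.
print(empathize(0, "insult"))
```

Nothing in that loop requires biology - just a self-model and the willingness to run it on somebody else's state.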
My point is that, collectively, IMO our species was caught in a snafu. Most epistemology of artificial intelligence never assumed there was a difference between intelligence and consciousness. The assumption has always been that consciousness is a product of evolution, intellect, etc. Yet, as babies, human beings are among the most useless and ill-equipped! It seems that consciousness comes to us way before we can learn operantly. So does that mean things not learned operantly are not part of intelligence? Things like language? Are they part of consciousness? An instinct of sorts?
More questions than answers... We live in an age where we have to rightly say machines can have intelligence. And this milestone has just skipped by with hardly a peep.
Comment
-
Cool article on Deep Learning for anyone interested... it ain't particularly in-depth, more of a general overview...
The ultimate goal of AI scientists is to replicate the kind of general intelligence humans have. And we know that humans don’t suffer from the problems of current deep learning systems.
“Humans and animals seem to be able to learn massive amounts of background knowledge about the world, largely by observation, in a task-independent manner,” Bengio, Hinton, and LeCun write in their paper. “This knowledge underpins common sense and allows humans to learn complex tasks, such as driving, with just a few hours of practice.”
Elsewhere in the paper, the scientists note, “[H]umans can generalize in a way that is different and more powerful than ordinary iid generalization: we can correctly interpret novel combinations of existing concepts, even if those combinations are extremely unlikely under our training distribution, so long as they respect high-level syntactic and semantic patterns we have already learned.”
Scientists provide various solutions to close the gap between AI and human intelligence. One approach that has been widely discussed in the past few years is hybrid artificial intelligence that combines neural networks with classical symbolic systems. Symbol manipulation is a very important part of humans’ ability to reason about the world. It is also one of the great challenges of deep learning systems.
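To make the hybrid idea concrete, here's a minimal toy sketch: a stand-in for a neural classifier turns raw input into a symbol, then a classical rule engine reasons over the symbols. The "network" here is just a lookup stub, and all the inputs, symbols, and rules are invented for illustration:

```python
def perceive(pixels):
    """Stand-in for the neural half: raw input -> symbol.
    A real system would run a trained network here."""
    return "cat" if sum(pixels) > 10 else "dog"

# The symbolic half: explicit, human-readable rules.
RULES = {
    ("cat", "mouse"): "chases",
    ("dog", "cat"): "chases",
}

def reason(subject, obj):
    """Classical symbol manipulation over the network's output."""
    return RULES.get((subject, obj), "ignores")

symbol = perceive([5, 4, 3])
print(symbol, reason(symbol, "mouse"))
```

The appeal of the hybrid approach is exactly this division of labour: perception handles messy raw data, while the symbolic layer does the explicit reasoning that current deep learning systems struggle with.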
Bengio, Hinton, and LeCun do not believe in mixing neural networks and symbolic AI. In a video that accompanies the ACM paper, Bengio says, “There are some who believe that there are problems that neural networks just cannot resolve and that we have to resort to the classical AI, symbolic approach. But our work suggests otherwise.”
The deep learning pioneers believe that better neural network architectures will eventually lead to all aspects of human and animal intelligence, including symbol manipulation, reasoning, causal inference, and common sense.
Comment