A decent basketball player as well.
@dustfingers
Just came back home from a lecture given by Michael Jordan. That man is a fecking machine. I had the impression that pretty much no one in the room (a few dozen people with PhDs, many of them university professors) understood most of what he was doing, and the ease with which he related ideas from statistics, functional analysis, and optimization to each other, and to other disciplines like quantum mechanics, was fascinating. In addition, he seems to speak several languages and is a great communicator.
The biggest show of intelligence I have ever seen in my life.
Yeah, after the lecture he started repeatedly dunking a tennis ball into a bottle.
Padua, Italy. I don't study there, but I met a professor from the University of Padua at a conference last week and he invited me.
I must say I didn't understand most of the lecture, but it was still quite fascinating. It also seems to me that he has gone from a neural network skeptic to a hater, and is now doing more theoretical stuff. In other words, he has gone full Vapnik.
Another interesting thing is that he opened the lecture in perfect Italian.
I read that they basically pushed it out the doors for some publicity, as they are due to release a new robot soon...
I read today that a Russian AI robot escaped from its facility for the second time. It was programmed to learn about its surroundings, and they're thinking of switching it off because it keeps wandering off on its own.
Why is he a hater? Does he think neural networks are being overused?
The end is nigh.
He didn't explain much, but it looked to be for these reasons:
1) ANNs are getting over-hyped (mostly by the media), along with the brain parallels, which are nonsense (to be fair, the main ANN researchers don't make those parallels; the media do).
2) ANNs are in reality very simple: just gradient descent applied to a large hypothesis space on large data. That isn't the most scientific method, and it is almost completely empirical (at the moment, Jordan is doing research on making the big data field a bit more scientific, rather than the usual trial and error).
3) He seems unconvinced about their power. For example, he said he doesn't think they will solve natural language processing (even though every company uses ANNs for that), though he said they may solve vision (again, deep learning algorithms lead every vision benchmark). He didn't mention other fields like autonomous driving (where deep learning also leads) or AlphaGo.
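Point 2 can be made concrete with a tiny sketch: a one-hidden-layer network learning XOR is literally just gradient descent on a parametric hypothesis. The layer size, seed, learning rate, and iteration count below are arbitrary illustrative choices, not anything from the lecture.

```python
import numpy as np

# An ANN as "gradient descent on a large hypothesis": one hidden layer
# learning XOR with plain full-batch gradient descent on squared error.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: chain rule through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(losses[0], losses[-1])  # loss should drop substantially
```

Nothing in there is more than calculus plus a big pile of parameters, which is exactly the "very simple, almost completely empirical" point.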
He seems to underappreciate everything that isn't heavy on maths and doesn't have a solid theoretical formulation, which might be why he left the field of neural networks in the early nineties. Or he might just be in that mood for now. I read an interview with LeCun some time ago in which he said that Jordan reinvents himself every 5 years or so, completely changing the field he works in. And that holds if you look at his career: he has a master's in physics and a PhD in cognitive science, then started working on machine learning, made contributions to neural networks, moved to statistical models (latent Dirichlet allocation), then to clustering (spectral clustering), and is now doing research on theoretical data science and accelerated optimization methods.
A master of all trades.
Yeah, but with regression and SVMs you know exactly what is going on, and the strength is in the algorithm, which also has awesome theoretical justifications. With ANNs (which I love, btw), the strength is in having a very large hypothesis space and then optimizing its cost function. A bit like infinite monkeys writing Shakespeare (btw, the Nvidia ANN for autonomous driving reportedly has more than 40 billion parameters, and there are probably bigger ANNs out there).
Everything from regression to ANNs to SVMs is over-hyped. "Failure is not an option. We're gonna find a pattern or die trying" is the mindset promoted by the media/industry.
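To illustrate the "you know exactly what is going on" side of that contrast: ordinary least squares regression has a closed-form solution you can derive and analyze, no iterative search over a giant hypothesis needed. The data below is synthetic and the true weights are arbitrary.

```python
import numpy as np

# Least squares regression via the normal equations:
#   w = (X^T X)^{-1} X^T y
# Every step has a first-principles justification, unlike a black-box ANN.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)  # noisy linear data

w_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solve rather than invert
print(w_hat)  # should be close to [2.0, -1.0, 0.5]
```

Using `np.linalg.solve` instead of explicitly inverting `X.T @ X` is the standard numerically stable choice.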
The big data field is a bit like the Wild West, lacking the mathematical rigour and peer-review processes of more mature disciplines (like statistics, for example). That will come with time.
Chimpanzees are capable of reasoned thought and abstraction, and have a concept of self. Chimps use reasoned thought when they process information and use their memory, for example when finding fruit according to the season. Chimps are capable of generalization and symbolic representation, as they are able to group symbols together, and some chimps have even learned to use American Sign Language. Chimps also have a "concept of self", which refers to an individual's perception of their being in relation to others.
It is complex and we don't understand it, but I don't think there is any reason to believe there is something magical there, or that we won't ever understand it.
Pogue says the brain is just an information processor made of meat, but I don't think that's the whole truth.
There is something magical, for want of a better word, about the brain. It wasn't wired up from a circuit diagram. The brain evolved over half a billion years into an almost infinitely complex jumble of interconnections. It's an emergent system whose behaviour as a whole cannot be deduced from a study of its individual parts. We will never understand it.
And our lack of understanding will stymie our attempts to duplicate its functions.
Our best shot, imo, is to try to echo biological evolution itself: create clusters of virtual 'neurons' and allow them to 'evolve' in response to the challenges of their cyber-environment. Whether the result would be anything useful, though...
At the moment we don't even know how to approach the creation of hard AI. I agree that we'll get there, but it is a lot further away than many of the people hyping this topic suggest.
He's up there with the greatest minds of the current era, probably outside of physics. I agree with him about neural networks these days, though. The human brain is unsupervised, or at least very weakly supervised, while ANNs are all about having millions of labeled samples, which probably isn't feasible for practical sparse-data problems.
I don't think it's going to be all doom and gloom by the 2060s. While AI will probably play a role in shaping future job markets via self-driving cars and other innovations, it is no different from when machines started replacing human labor after the industrial era.
I agree, but some of it depends on whether Moore's Law holds up. Some think it will come to an end in the near future; others believe they'll just increase the volume of chip-sets and find a way to keep them cool (3D chips).
The issue is that with exponential hardware increases, by 2029 we'll have a computer with the processing power of a human being, and by 2045 of every single human being on the planet.
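It's worth unpacking the doubling arithmetic behind that 2029-to-2045 jump. The assumptions below are mine, not the poster's: "every single human being" means roughly 8 billion human-brain equivalents, and growth is a clean exponential.

```python
import math

# Doublings needed to go from 1 human-brain equivalent to ~8 billion,
# and the doubling period implied by fitting them into 2029-2045.
brains_needed = 8e9
doublings = math.log2(brains_needed)           # ~33 doublings
years = 2045 - 2029
implied_period_months = years / doublings * 12

print(round(doublings, 1))
print(round(implied_period_months, 1))
```

Read the other way: hitting 2045 from 2029 implicitly assumes hardware doubling roughly every six months, considerably faster than the classic 18-24 month Moore's Law cadence.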
Then consider that by that point we'll have many more people in machine learning and computer science careers, and the field will be more mature in its thinking, so there'll be new leaps (children who are under 6 now will come through schools learning computer science from a young age, graduate from college, and spend 6-7 years in the field working on this problem).
There was a Nobel Prize panel that spoke about ASI and the Singularity not so long ago; it is a case of "when". Microsoft's chief basically said we're not far off 100% speech recognition, language is only a few years off, and computer vision won't take longer than a decade, so some portions of what you think it is to be human are already there.
(You may think "but since the computer's invention we haven't come that far!", but you're forgetting the exponential increase in processing power: it goes 2, 4, 8, 16, and so on. Only recently have we exceeded the brain power of things like birds and mice.)
Throw in the fact that if Moore's Law holds up, by 2045 we'll be able to run huge simulations of the human brain, and nanotechnology might be able to construct something similar to a human brain for experimentation. We'll be at least 15 years into a nano revolution; picture the gap between now and Y2K, that's how far along it'll be (and it was just after the year 2000 that we went beyond a computer with the processing power of an insect brain).
Neuroscience will understand the brain much better through computer simulation, advanced scanning tools, and medical understanding. I think by the 2040s we'll have AGI (artificial general intelligence), and people like Nick Bostrom, Bill Gates, and Stephen Hawking believe that once AGI is here, ASI will soon follow.
For those who doubt ASI will soon follow AGI, think of this: a computer that has just exceeded any living human's intelligence can read through every single written work, including scientific papers, and apply pattern recognition, logic, and reasoning in under 10 seconds (a human couldn't do that in a lifetime).
Now consider that a human being can work maybe 6-8 hours without getting tired and slowing down (much less than that, factoring in breaks, daydreaming, etc.). This AGI will have the hardware of every human on the planet combined, and it can function 24/7, running multiple clusters and threads, probably 100,000+ brain simulations at the same time.
This AGI will be able to refine its own software and look for improvements, and deep learning has already demonstrated learning from mistakes (and think how much better our algorithms will be by the 2040s). ASI won't be long after; we're talking years, not decades.
I really believe we'll have it by the 2060s at the latest. I know some here believe we won't, but the gap between now and then is roughly the same as between now and the 1980s, when the very first PCs and Walkmans were available, and the acceleration is driven by computational power more than anything.
For those on this thread who are young, take good care of yourselves because the 2040s-2060s are going to be mind-blowing compared to now.
The main thing is probably how many examples a brain needs versus how many a computer needs to become truly good at recognizing things and listening properly. The supervised learning problem is modeled as a simple curve-fitting exercise that minimizes classification error, which is a lot easier and more controlled than learning what the classification task is and what the features are. It would also be interesting to see joint learning models perform the way humans do. And creativity is so far from even being understood that we're nowhere near seeing a machine replicate it.
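The "supervised learning as curve fitting" framing is easy to show literally: given labeled examples, pick the function from a fixed family that minimizes error. The data-generating function, noise level, and polynomial degree below are arbitrary illustrative choices.

```python
import numpy as np

# Supervised learning as curve fitting: least-squares fit of a degree-5
# polynomial to noisy samples of a fixed (but "unknown") function.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + rng.normal(scale=0.05, size=50)  # noisy "labels"

coeffs = np.polyfit(x, y, deg=5)    # minimize squared error over the family
y_hat = np.polyval(coeffs, x)
mse = float(np.mean((y_hat - y) ** 2))
print(mse)  # small residual error on the training points
```

The hard parts the post points at, choosing what the task is and what the features are, all happen before this step; the fit itself is the easy, controlled bit.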
AI already beats the brain at many pattern tasks and logical games. It already beats the average human at image recognition, it can recover the voice from a silent video (something the human brain can't do), and within 2-5 years it will drive cars better than us. I don't know if we'll get 'hard AI' in our lifetime, but I think eventually we'll get there.
I don't do neuroscience research, but based on what I have heard from experts, it's highly unlikely we could reach a full understanding of the brain within the next twenty or thirty years. Even consciousness is super hard to explain and will probably take decades to understand on its own. Without a good theory of how the brain works or how intelligence is learned and acquired, no amount of machine resources is going to help in creating a truly intelligent system.
It's really hard to say yes or no to what'll happen; the computational power will be there to do anything. It'll have huge applications in medicine and every other walk of life. Even without AGI there'll be some very good applications; ANI will be beyond any human ability.
I firmly believe it'll be this century, no later than 2060-70, but then again it's all guesswork: nobody knows what theories will be put forward, when someone will have a Eureka moment, where the funding will be, or whether the right people will go to the right schools and get the right jobs.
I don't see AGI as doom and gloom though, as long as there are good procedures in place and the logistics are figured out.
Yes, full understanding of the brain is a huge problem that eludes us to this day, but once diseases like cancer are cured, isn't the last big mystery for humanity AI and how the brain truly functions? Won't there be a lot more people researching those subjects?
I think it would be truly fulfilling to reach a point where we truly understand the human brain and are actually able to replicate it in some form of chemical system, so that human consciousness, or the sense of being oneself, can be transferred from the brain to other mechanical parts. This would probably mean no disease and the attainment of immortality, but probably useless human beings too. That will probably be the future someday.
Another would be using some form of flash memory to learn new subjects, like inserting a flash card into the human brain and becoming an expert in quantum computing all of a sudden. That is why the secret of unsupervised learning is so important to figure out.
I don't think it is going to be doom any time soon, though, because everything is trending toward evolution rather than revolution.
Right now we see limitations because we're only at around one mouse brain, so it's really ANI.
I don't know much about this issue except in philosophical terms, so I'm wondering what's meant by this. Are you saying there's a machine out there that, provided all components could be appropriately miniaturized, can perform all the functions of a mouse? Would be indistinguishable in its behaviour from an advanced biological organism? Or do you mean it has the processing capacity of a mouse brain?
It seems to me that this distinction is not properly respected by advocates of AI. By analogy, it's like assembling all the chemical constituents of a human body, dumping them on the floor, and saying: 'There, that's a human being.' But the trick is in the interconnection of the component parts.
Brute processing capacity is clearly not the key to brain function. And it doesn't work by algorithms. The idea that stumbling on the right algorithm will magically transform a mass of dead circuitry into an intelligent entity is no more than an act of faith. And probably a misplaced one.
As far as we know the only thing capable of intelligent thought in the Universe is the biological brain. AI seems to be largely an attempt to arrive at the destination of intelligent functioning by taking a different route. But there may be no other route.
To me it seems likely that the architecture of the brain is inseparable from its function. Intelligence is what a brain does. The two cannot be separated. It's a unique one-to-one mapping.
Proof of self-recognition in chimps; dogs don't possess this trait.
The concern for me isn't an AI that has any deliberate intent to cause harm, but dangerous, unpredicted emergent behaviour. Working with even the very simple AI used in video games, you see this all the time: very simple AI routines bump against each other and produce totally unpredicted results. The more complex AI routines get, the harder it'll be to predict what outcomes you might see and to mitigate them.
Game AI isn't the most sophisticated AI out there, though, and it's probably poorly optimised; if you want to see good route-finding AI, look at driverless cars (linked below, a very good watch by the way).
That's kind of my point, though: they're usually extremely unsophisticated, yet even then foreseeing how relatively simple routines will interact in any given situation is extremely challenging. As systems become more complex and interconnected, I struggle to see how it could even be possible to foresee every potential negative outcome. In a video game that doesn't really matter, but in systems that could be responsible for, say, infrastructure, the dangers are unimaginable.
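A toy illustration of that interaction effect, with everything (rules, arena size, start positions) made up for the sketch: two agents each follow one trivially simple rule, yet together they reach a state neither rule "intends".

```python
# Chaser: always step toward the target.
# Target: step away whenever the chaser gets within distance 3.
# In a bounded world the target gets cornered and its distance rule
# is permanently violated, an outcome neither rule describes.

def step(a, b, lo=0, hi=10):
    a += (1 if b > a else -1 if b < a else 0)  # chaser moves toward target
    if abs(b - a) < 3:                         # target flees if too close
        b += (1 if b >= a else -1)
    return max(lo, min(hi, a)), max(lo, min(hi, b))

a, b = 0, 5
history = []
for _ in range(30):
    a, b = step(a, b)
    history.append((a, b))

print(history[-1])  # → (10, 10): cornered, distance 0, the flee rule has failed
```

Two one-line rules already produce a stuck state you'd have to simulate to find; real routines are far harder to reason about in combination.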
The reason this is great is that they have hired the creator of Google Translate, who worked on it for nearly a decade, basically to work with the genome and analyse it. This could mean that in the future you are one blood sample away from a computer determining that the sequence "X Y Z B L" corresponds to pancreatic cancer; you could just detect it without going to a doctor's surgery and describing your symptoms.