Artificial Intelligence

I'm not a fan of a heavily data-driven genomics approach for all diseases, especially if they are ignoring epigenetics altogether in their sequencing (and at those speeds they must be). At the end of the day, each polymorphism you find to be significantly different in cancer patients versus the general population will only mean a higher cancer probability by some fraction of a percent. At least that's what I've seen after attending many talks by these genomics guys.
Integrating a genomics+epigenomics result with a mechanism (why this gene, why this mutation) is IMO better.

But then I don't work on diseases, and I'm really ignorant about a lot of the newer data methods, so I could be wrong.

I'm not working in a medical profession either, but a lot of computer science is currently starting to focus on medical care, and it does have its applications. If we can describe certain symptoms to a doctor and they can come to a conclusion about what disease we may have, how is that any different from having large datasets and medical records from the past driving future medical care?

A lot of what is being done seems to be aimed at prevention more than cure now, but then again we'll have to see the fruits of the labour in the next decade or so. I guess we're also going to see who arrives at the answer first, and which one is more accurate, and more importantly - why.
 
Just to reply to you RE: yesterday, and the whole 2040s-2060s prediction.

Do you not think that when a computer has the right algorithm and the collective processing power of every human on the planet (and the year after that, another exponential increase on top of every human on the planet), it only takes the right idea or algorithm for there to be an explosion at that point?

We're talking about a day or a week: with the right idea it could really do some serious learning and development, and we have to think about the power available at that point too.

I'm talking about intelligence here, not motor functions or dexterity (I think that'll be done in the 30s); I mean true intelligence and learning.

Right now we see limitations because we're only at around one mouse brain, so it's really ANI.

I just see that as AI gets better in the next 10 years, more graduates will switch from web and software engineering to pure AI, and universities will start to run courses solely in AI too. That again is another leap.
Devising the right algorithm is the most important thing, and we have yet to devise anything that could essentially be called the unfolding of pure intelligence. Right now, machine intelligence is just a curve-fitting problem over a bunch of points, whatever we want to call it. It needs a revolution in mathematical foundations (theory), physics and cognitive neuroscience to actually make an impact [machine intelligence research is the application of advances in maths, cognitive science and various other sciences, like an engineering design]. We can make a machine drive a car by collecting millions of driving videos and training on them, but that is a lot more like non-linear multivariate curve fitting than intelligence.
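To make the "curve fitting" point concrete, here is a minimal sketch (assuming NumPy and scikit-learn are available; the target function and noise level are made up for illustration). A small neural network "learns" a noisy sine purely by fitting a curve to sampled points, with no understanding involved:

```python
# Minimal illustration: "learning" as non-linear curve fitting.
# Assumes numpy and scikit-learn; the target function is arbitrary.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(500, 1))                          # sampled inputs
y = np.sin(2 * X).ravel() + rng.normal(scale=0.1, size=500)    # noisy "truth"

# A small multilayer perceptron fitted to the samples.
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000, random_state=0)
net.fit(X, y)

# The network now interpolates the curve it was shown -- nothing more.
X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.round(net.predict(X_test), 2))
print(np.round(np.sin(2 * X_test).ravel(), 2))
```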
I agree that interest in AI will be greater in the next few decades though. It is definitely going to be a major part of our education. But whether that will result in the secrets of intelligence being uncovered is debatable. I personally don't think so. In fact, many researchers even say that the goal of AI these days is not to make intelligent machines, just systems that are good at one particular task, like an expert system.
BTW, are you into machine learning and AI yourself?
 
Game Developers don't really care about AI though.

Most games are multiplayer, so it's not really in their budget; they'd much rather have nice level design, environments and visuals than good AI.

Game Developers are looking at fun mainly. AI doesn't really factor in.

I'll mention that to my AI programmer friends who spend endless months trying to get their AI routines to not do wildly unpredictable things. ;)
 
I don't know much about this issue except in philosophical terms, so I'm wondering what's meant by this. Are you saying there's a machine out there that, provided all components could be appropriately miniaturized, can perform all the functions of a mouse? That it would be indistinguishable in its behaviour from an advanced biological organism? Or do you mean it has the processing capacity of a mouse brain?

It seems to me that this distinction is not properly respected by advocates of AI. By analogy, it's like assembling all the chemical constituents of a human body, dumping them on the floor, and saying: 'There, that's a human being.' But the trick is in the interconnection of the component parts.

Brute processing capacity is clearly not the key to brain function. And it doesn't work by algorithms. The idea that stumbling on the right algorithm will magically transform a mass of dead circuitry into an intelligent entity is no more than an act of faith. And probably a misplaced one.

As far as we know the only thing capable of intelligent thought in the Universe is the biological brain. AI seems to be largely an attempt to arrive at the destination of intelligent functioning by taking a different route. But there may be no other route.

To me it seems likely that the architecture of the brain is inseparable from its function. Intelligence is what a brain does. The two cannot be separated. It's a unique one-to-one mapping.
Definitely not. It is a common misconception (similar to the claim that AI will reach human intelligence in 2049).

What we have now are neural networks which are as large as a mouse's brain, and if Moore's law holds, it is forecast that by 2049 we will have neural networks as large as human brains. However, being as large doesn't mean being as intelligent, and neural networks are very primitive compared with the human (and other animals') brain. How they work at the moment is a curve-fitting approach: we give them a ton of samples and they learn to classify them, but that is very likely not how the brain works.

In order to achieve true intelligence, the approaches need to change.
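As a deliberately toy example of the "ton of samples, learn to classify them" regime described above, here is a minimal sketch assuming scikit-learn; the data are synthetic blobs, nothing brain-like:

```python
# Toy illustration of sample-driven classification: the model only ever
# sees labelled points and fits a decision boundary to them.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic "ton of samples" with known labels.
X, y = make_blobs(n_samples=5000, centers=3, cluster_std=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# High accuracy here says nothing about "understanding" -- only that the
# boundary between the training clusters was fitted well.
print(accuracy_score(y_test, clf.predict(X_test)))
```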
 
First, we need quantum computing to exist (they think the human brain is 30 times faster than any supercomputer), and then we would need a tremendous storage drive; they think we humans have a capacity of 2.5 petabytes, though some think we are closer to 30 petabytes.

http://spectrum.ieee.org/tech-talk/...rain-30-times-faster-than-best-supercomputers
What the heck does this mean?

The human brain has large processing power because of its large number of neurons and the weights (synapses) between them, but even a simple calculator is faster than the human brain at tasks like arithmetic. Nowadays, computers are getting better (and faster) at many other, more complex things like image recognition (the best ANNs outperform humans on the ImageNet dataset), driving cars, games like Go and so on.
 
Devising the right algorithm is the most important thing, and we have yet to devise anything that could essentially be called the unfolding of pure intelligence. Right now, machine intelligence is just a curve-fitting problem over a bunch of points, whatever we want to call it. It needs a revolution in mathematical foundations (theory), physics and cognitive neuroscience to actually make an impact [machine intelligence research is the application of advances in maths, cognitive science and various other sciences, like an engineering design]. We can make a machine drive a car by collecting millions of driving videos and training on them, but that is a lot more like non-linear multivariate curve fitting than intelligence.
I agree that interest in AI will be greater in the next few decades though. It is definitely going to be a major part of our education. But whether that will result in the secrets of intelligence being uncovered is debatable. I personally don't think so. In fact, many researchers even say that the goal of AI these days is not to make intelligent machines, just systems that are good at one particular task, like an expert system.
BTW, are you into machine learning and AI yourself?

I'm doing my Master's degree in Computer Science, starting in October, to finish off my education; maybe in a decade or so I'll consider a doctorate.

I've had my eye on AI for a while as an interest (a future goal maybe), but recently I've really started considering it as a career path. It's a really interesting subject, and until a few months ago I didn't really think it could be something great. Prior to doing more background reading I was considering software engineering; now I think if I did AI, at least I could contribute to something much greater. (Still yet to make that choice though.)
 
Definitely not. It is a common misconception (similar to the claim that AI will reach human intelligence in 2049).

What we have now are neural networks which are as large as a mouse's brain, and if Moore's law holds, it is forecast that by 2049 we will have neural networks as large as human brains. However, being as large doesn't mean being as intelligent, and neural networks are very primitive compared with the human (and other animals') brain. How they work at the moment is a curve-fitting approach: we give them a ton of samples and they learn to classify them, but that is very likely not how the brain works.

In order to achieve true intelligence, the approaches need to change.

I'm sure I'm way behind the curve here, but I remember seeing a TV programme about 25 years ago in which a neural network was taught to recognize faces. Eventually it was capable of distinguishing between male and female faces more accurately than a human being. I was very impressed that a machine could learn such a human-like function, and thought this was the shape of things to come. I waited for further developments, but nothing happened. It all went quiet.

Then a few years ago I read that neural networks hadn't lived up to their early promise. The amount of time required for them to learn to perform more complex tasks increased exponentially.

What's the state of play now? Are they still considered a way forward?
 
I'm sure I'm way behind the curve here, but I remember seeing a TV programme about 25 years ago in which a neural network was taught to recognize faces. Eventually it was capable of distinguishing between male and female faces more accurately than a human being. I was very impressed that a machine could learn such a human-like function, and thought this was the shape of things to come. I waited for further developments, but nothing happened. It all went quiet.

Then a few years ago I read that neural networks hadn't lived up to their early promise. The amount of time required for them to learn to perform more complex tasks increased exponentially.

What's the state of play now? Are they still considered a way forward?
There definitely weren't neural networks able to classify faces better than humans 25 years ago. At the beginning of the nineties there was a neural network (called LeNet) which was able to do a very good job at classifying digits (a much easier job than face recognition/classification), though not as good as humans (but close enough).
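For reference, a LeNet-style architecture expressed in modern terms looks roughly like this; a minimal PyTorch sketch, not the exact LeNet-5 of the early nineties (the layer sizes are illustrative):

```python
# Rough LeNet-style convolutional network for 28x28 grayscale digits.
# Assumes PyTorch; layer sizes are illustrative, not the historical LeNet-5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetish(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)   # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)             # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 6 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

# Sanity check on a dummy batch of four 28x28 images.
print(LeNetish()(torch.zeros(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```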

ANNs went out of favour in the mid-nineties, but came back stronger than ever in the last few years, now labelled under the term 'deep learning'. Among other things, they are now better than humans at image classification (although at times they might give completely batshit crazy answers, so are they really better?!), quite good at natural language processing (not as good as a human translator though), good at driving cars (Baidu's autonomous car, which leads in benchmarks, is ANN-based, same with NVIDIA's, not sure about Google's), AlphaGo used deep learning together with a tree search algorithm, Baidu is also making a device which will help visually impaired people, and so on. At the moment, deep learning might be the biggest thing in science, let alone in AI (it is for sure the biggest thing in computer science).

However, they are nowhere near human-level intelligence. They are good at classifying things, assuming that you give them a ton of samples during the training stage (and you tune them well, avoid overfitting and so on), because of their large hypothesis space, but they are more similar to 'infinite monkeys writing Shakespeare' than to 'true intelligence'.

I think that ANNs will likely be the future of AI, but they will change a lot in the coming years/decades, and I have no idea at all whether ANN algorithms will be the ones which reach true intelligence, or whether some other algorithm coming in the future will do so. If history is any guide, expect ANNs to drop dead in a few years and come back a couple of decades later. That happened with primitive neural networks (perceptrons) and with multilayer perceptrons powered by backpropagation.
 
I'm doing my Master's degree in Computer Science, starting in October, to finish off my education; maybe in a decade or so I'll consider a doctorate.

I've had my eye on AI for a while as an interest (a future goal maybe), but recently I've really started considering it as a career path. It's a really interesting subject, and until a few months ago I didn't really think it could be something great. Prior to doing more background reading I was considering software engineering; now I think if I did AI, at least I could contribute to something much greater. (Still yet to make that choice though.)

For what it's worth, I think that AI is more related to mathematics than to computer science. Sure, you will eventually need to program your ideas/algorithms, but that is the easy part. To truly excel at machine learning, you will need a background in mathematics similar to that of physicists (so a shitload of maths). Deep learning is arguably the least maths-heavy machine learning subfield, which is one of the reasons I chose it as my PhD field.
 
For what it's worth, I think that AI is more related to mathematics than to computer science. Sure, you will eventually need to program your ideas/algorithms, but that is the easy part. To truly excel at machine learning, you will need a background in mathematics similar to that of physicists (so a shitload of maths). Deep learning is arguably the least maths-heavy machine learning subfield, which is one of the reasons I chose it as my PhD field.

That's what concerns me. Is there any written material you can recommend? I have gone quite heavy into some discrete mathematics over the last few months. (I'm aware that going forward it won't be something I'll learn from an academic background, but in my own time.)
 
On a side note, hearing comp scientists and theoretical physicists talk gives me a real inferiority complex. I suck at even entry-level maths, and my admission to undergrad was saved by my chem and physics scores... In the first year we had three compulsory maths courses, and my roommate was a natural (he sucked at understanding what an equation in physics meant in real life though). I ended up losing out because I couldn't bear to study maths with him and keep asking how/why he was making some substitution.
 
That's what concerns me. Is there any written material you can recommend? I have gone quite heavy into some discrete mathematics over the last few months. (I'm aware that going forward it won't be something I'll learn from an academic background, but in my own time.)
Gilbert Strang's lectures in Linear Algebra are a joy to watch: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/

You will obviously need some calculus: http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/ and http://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/

in addition to some probability: http://ocw.mit.edu/courses/electric...s-analysis-and-applied-probability-fall-2010/

Assuming that you have this type of knowledge, you will be able to read most research papers.

Some good machine learning books are (in order from easiest to most difficult):

1. Tom Mitchell - Machine Learning https://www.amazon.co.uk/MACHINE-LE...=1471560145&sr=8-10&keywords=machine+learning

2. Duda, Hart and Stork - Pattern Classification - https://www.amazon.co.uk/Pattern-Cl...-1&keywords=pattern+recognition+duda+and+hart

3. Chris Bishop - Pattern Recognition and Machine Learning - https://www.amazon.co.uk/Pattern-Re...d=1471560145&sr=8-5&keywords=machine+learning

4. Kevin Murphy - Machine Learning: A Probabilistic Perspective - https://www.amazon.co.uk/Pattern-Re...d=1471560145&sr=8-5&keywords=machine+learning

This should cover most of the machine learning stuff, while for deep learning, this is the best book (it won't be available in printed format until the new year): http://www.deeplearningbook.org/

...

If you want to become a true expert in the field, then I guess you have to study far more and master the mathematics of machine learning. Michael I. Jordan recommends these books, and I hope to be able to read all of them: http://www.statsblogs.com/2014/12/3...-suggested-by-michael-i-jordan-from-berkeley/

Most of the books are on frequentist statistics, but there are also books on Bayesian statistics, linear algebra, optimization (extremely needed for machine learning), information theory (a very nice sub-discipline, and you will find the concept of entropy everywhere), probability and functional analysis (needed mostly for kernel methods but also for lower/upper bounds of functions). I would also recommend studying a bit of game theory; it can be incredibly useful, and you can do AI using purely game-theoretic approaches.
 
Gilbert Strang's lectures in Linear Algebra are a joy to watch: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/

You will obviously need some calculus: http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/ and http://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/

in addition to some probability: http://ocw.mit.edu/courses/electric...s-analysis-and-applied-probability-fall-2010/

Assuming that you have this type of knowledge, you will be able to read most research papers.

Some good machine learning books are (in order from easiest to most difficult):

1. Tom Mitchell - Machine Learning https://www.amazon.co.uk/MACHINE-LE...=1471560145&sr=8-10&keywords=machine+learning

2. Duda, Hart and Stork - Pattern Classification - https://www.amazon.co.uk/Pattern-Cl...-1&keywords=pattern+recognition+duda+and+hart

3. Chris Bishop - Pattern Recognition and Machine Learning - https://www.amazon.co.uk/Pattern-Re...d=1471560145&sr=8-5&keywords=machine+learning

4. Kevin Murphy - Machine Learning: A Probabilistic Perspective - https://www.amazon.co.uk/Pattern-Re...d=1471560145&sr=8-5&keywords=machine+learning

This should cover most of the machine learning stuff, while for deep learning, this is the best book (it won't be available in printed format until the new year): http://www.deeplearningbook.org/

...

If you want to become a true expert in the field, then I guess you have to study far more and master the mathematics of machine learning. Michael I. Jordan recommends these books, and I hope to be able to read all of them: http://www.statsblogs.com/2014/12/3...-suggested-by-michael-i-jordan-from-berkeley/

Most of the books are on frequentist statistics, but there are also books on Bayesian statistics, linear algebra, optimization (extremely needed for machine learning), information theory (a very nice sub-discipline, and you will find the concept of entropy everywhere), probability and functional analysis (needed mostly for kernel methods but also for lower/upper bounds of functions). I would also recommend studying a bit of game theory; it can be incredibly useful, and you can do AI using purely game-theoretic approaches.

Cheers, I love OCW by the way, it's a great free education tool. As a side note for anyone interested in this thread, there's so much free education available these days, such as edx.org and coursera.org; MIT is great with OCW too.

I really appreciate the time you've taken, Revan. It's a huge year for me in my professional development, and ML/AI is something that interests me, so maybe I should get out of my comfort zone of software engineering and push the boat out a little. I will look at ordering some of the written materials you've suggested as well as the Michael I. Jordan recommended reading!
 
Cheers, I love OCW by the way, it's a great free education tool. As a side note for anyone interested in this thread, there's so much free education available these days, such as edx.org and coursera.org; MIT is great with OCW too.

I really appreciate the time you've taken, Revan. It's a huge year for me in my professional development, and ML/AI is something that interests me, so maybe I should get out of my comfort zone of software engineering and push the boat out a little. I will look at ordering some of the written materials you've suggested as well as the Michael I. Jordan recommended reading!
Free education is one of the best things that has happened this century. I really love edX, best thing ever (though I quite dislike Coursera nowadays). If you are interested, you can do this course (starting next month): https://www.edx.org/course/learning-data-introductory-machine-caltechx-cs1156x given by Abu-Mostafa of Caltech (he is also the co-founder of NIPS, the top ML conference).

I am familiar with most of the stuff there (it covers exclusively supervised learning), but I'm going to do the course anyway because the instructor is a brilliant lecturer and goes heavy on theory (while also having good programming assignments). You may complement it later with Andrew Ng's course at Stanford (not the dumbed-down version on Coursera), which also covers unsupervised and reinforcement learning, but be warned: back when the course was recorded (2008), Ng was nowhere near as good a lecturer as when he did the Coursera version in 2011.
 
What the heck does this mean?

The human brain has large processing power because of its large number of neurons and the weights (synapses) between them, but even a simple calculator is faster than the human brain at tasks like arithmetic. Nowadays, computers are getting better (and faster) at many other, more complex things like image recognition (the best ANNs outperform humans on the ImageNet dataset), driving cars, games like Go and so on.

Sure, they are faster at a few tasks, but ask a computer whether the weather is nice and it gets stuck. We have to process way more than any supercomputer; the fact that we are breathing, walking and talking is thanks to our brain, while the robots we have can barely walk. We are still a few years from developing a true AI, and to do that we need a quantum computer and a lot of data storage; until then we have Siris.

http://spectrum.ieee.org/tech-talk/...rain-30-times-faster-than-best-supercomputers
 


Watson created a trailer for the film Morgan, so I thought it might be worth a link. Obviously not big news, but nice to see small stepping stones.
 
For what it's worth, I think that AI is more related to mathematics than to computer science. Sure, you will eventually need to program your ideas/algorithms, but that is the easy part. To truly excel at machine learning, you will need a background in mathematics similar to that of physicists (so a shitload of maths). Deep learning is arguably the least maths-heavy machine learning subfield, which is one of the reasons I chose it as my PhD field.

Jesus, and I thought the maths was tough in my HNC in electrical engineering.
 
Jesus, and I thought the maths was tough in my HNC in electrical engineering.
I mean, in order to really shine in ML research, you need to be good at math, similar to someone who has a physics background.

You can definitely do some work in AI without knowing a shitload of maths (you still need decent knowledge of calculus, linear algebra, probability, statistics, information theory and convex optimization though).
 
On a side note, hearing comp scientists and theoretical physicists talk gives me a real inferiority complex. I suck at even entry-level maths, and my admission to undergrad was saved by my chem and physics scores... In the first year we had three compulsory maths courses, and my roommate was a natural (he sucked at understanding what an equation in physics meant in real life though). I ended up losing out because I couldn't bear to study maths with him and keep asking how/why he was making some substitution.
What were the topics?
 
I mean, in order to really shine in ML research, you need to be good at math, similar to someone who has a physics background.

You can definitely do some work in AI without knowing a shitload of maths (you still need decent knowledge of calculus, linear algebra, probability, statistics, information theory and convex optimization though).

That's me out then :lol:.
 
Btw, anyone else read about Google's Neural Network developing an encryption algorithm which seems to perform well, but no-one knows how it really works?
 
I read somewhere last year that AI is predicted to overtake human intelligence within 100 years. That is a frightening thought really.
 
I think that AI is more related to mathematics than computer science.
Computer science, at the logical layer (which is the most relevant here), is pretty much mathematics in a different language. Just as an example, you cannot hope to achieve any results in image recognition without using multiple integrals along with linear algebra and so on.
 
Btw, anyone else read about Google's Neural Network developing an encryption algorithm which seems to perform well, but no-one knows how it really works?

Is that the one where two separate AIs developed an algorithmic language so that they could communicate anonymously?
 


Watson created a trailer for the film Morgan, so I thought it might be worth a link. Obviously not big news, but nice to see small stepping stones.


God damn it, now I have to look into a new field of work. But I have to admit it's kind of impressive that even the creative industries aren't out of the question when it comes to automation any more.
 
Computer science, at the logical layer (which is the most relevant here), is pretty much mathematics in a different language. Just as an example, you cannot hope to achieve any results in image recognition without using multiple integrals along with linear algebra and so on.
Absolutely wrong. The state of the art in image recognition is convolutional neural networks, which don't use integrals at all. It is just sums and partial derivatives.
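To make the "just sums" point concrete, here is a minimal NumPy sketch (the image and kernel are arbitrary): a 2D convolution as used in convolutional networks, computed as nested finite sums with no integral anywhere.

```python
# Discrete 2D "convolution" (technically cross-correlation, as used in
# convolutional networks): nothing but finite sums over a local window.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # sum over u, v of image[i+u, j+v] * kernel[u, v]
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # arbitrary 5x5 "image"
kernel = np.array([[1., 0., -1.]] * 3)            # simple edge-like filter
print(conv2d(image, kernel))                      # 3x3 output
```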

But the main point stands, AI is just applied mathematics.
I read somewhere last year that AI is predicted to overtake human intelligence within 100 years. That is a frightening thought really.
The median year given by experts is 2049 IIRC, although I think that no-one really has a clue and at the moment, we aren't close to making a hard AI.
Is that the one where two separate AIs developed an algorithmic language so that they could communicate anonymously?
Maybe. I have yet to read the article (I think it hasn't been published yet).
 
Btw, anyone else read about Google's Neural Network developing an encryption algorithm which seems to perform well, but no-one knows how it really works?

Trump does, bigly.
 
When thinking about AI, I always think of the classic line... Are we that concerned with "whether we could, without stopping to think about whether we should?"
 
Local and spectral neighbourhood sums are applied which use exactly the same definition as that of a multivariable integral.
Sums are discrete though, so quite a bit of difference from integrals.
 
When thinking about AI, I always think of the classic line... Are we that concerned with "whether we could, without stopping to think about whether we should?"

My AI lecturer thinks the fear of the singularity moment is stupid, and that Musk and Hawking use it to create a mystique and promote interest around an important field.
 
My AI lecturer thinks the fear of the singularity moment is stupid, and that Musk and Hawking use it to create a mystique and promote interest around an important field.

Ah I see. I don't know the first thing about AI by the way. It is a very interesting field though!
 
Sums are discrete though, so quite a bit of difference from integrals.
One Z-transform away. Of course, if you meant that as a difference then sure - but you'd really need a thorough grasp of integral calculus to work anywhere in image processing. I worked for a year on bringing hyperspectral images down to earth from imagers like AVIRIS, and most of the theoretical work involved dealing with integrals (and sums of course) while working with prediction methods. The very core of it.
 
My AI lecturer thinks the fear of the singularity moment is stupid, and that Musk and Hawking use it to create a mystique and promote interest around an important field.
This is so far away (not talking about a time frame but about our lack of understanding) that wasting any thought on it is fairly pointless. It is like speculating about the problems of interstellar travel when you haven't even invented the wheel.