Artificial Intelligence

Maybe advancements in neuroscience will outpace advancements in artificial intelligence.
We could master our own consciousness and intelligence to the point that machine intelligence becomes redundant. For example:

1) We manipulate the brain to master our own happiness. For example, we could exist in a permanent dream-state, a sort of internal simulation, stimulated in such a way as to create as close to a perfect "existence" as possible. This might seem far-fetched at the moment, but we're only just scratching the surface of biological sleep - the power it holds for us, and the control we can exert over it. This would probably involve some machine technology, but not necessarily AI.

2) We "artificially evolve" our minds to become ever more rational and logical. We could end up rationalising even our own existence, overriding our most fundamental instincts - to survive and procreate.

Both scenarios ultimately lead to a gradual slide into peaceful human extinction rather than what could be perceived as the never-ending futility of species survival.
Even if those don't happen, I wouldn't be surprised if artificial intelligence comes in a more biological form - a "grown" intelligence or superorganism rather than a machine intelligence. But I might be overlooking quantum computing.
Out of curiosity, what does quantum computing have to do with it?

From what I know, even though our brain is a quantum system (like everything else in the universe), experts seem to believe that quantum mechanics plays no role in intelligence and consciousness. Also, most experts seem to think that the main thing we would get from quantum computing is the ability to directly simulate quantum systems, not more powerful general-purpose supercomputers.

I know that there is quantum machine learning, but I have yet to talk with anyone who thinks it will be a big thing in the near to mid term. Even if quantum computing takes off, it is very hard to see it being useful for machine learning (unlike, for example, cryptography, where quantum computing will be a game-changer).
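
To make the cryptography point concrete, here's a tiny, purely classical toy of the period-finding idea behind Shor's algorithm. The function names and toy numbers below are my own, and a real quantum computer only accelerates the `find_period` step - this is a sketch of the idea, not a quantum program:

```python
# Toy, purely classical illustration of the period-finding idea behind
# Shor's algorithm - the reason quantum computing threatens RSA-style crypto.
# Helper names and the tiny numbers are invented for illustration.
from math import gcd

def find_period(a, n):
    """Find the order r of a modulo n, i.e. the smallest r with a^r = 1 (mod n).
    This is the step a quantum computer does exponentially faster (via the QFT)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Split n using the period of a mod n, as in Shor's algorithm."""
    g = gcd(a, n)
    if g != 1:
        return g                    # lucky guess: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None                 # odd period: retry with a different a
    candidate = gcd(pow(a, r // 2) - 1, n)
    return candidate if 1 < candidate < n else None

print(shor_factor(15, 7))  # -> 3, since 15 = 3 * 5
print(shor_factor(21, 2))  # -> 7, since 21 = 3 * 7
```

Classically, that period-finding step takes time exponential in the number of digits of n, which is exactly what keeps RSA safe; Shor's quantum version does it in polynomial time. That's why quantum computing is a game-changer for cryptography, while no comparably decisive speedup is known for machine learning.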
 
What I mean is that they are not working in the field, so they do not know much about it. Their opinions are, for the most part, worth about as much as those of random people. They simply are not experts and are talking about things they do not know much about. (Musk might know a bit more, simply because he is surrounded by people who know a lot about it and has been attending ML conferences, but he definitely does not have a strong technical background in it.)

I don't know. I don't think any of them are portraying themselves as experts on the technical side of AI. Societal implications won't require a deep understanding of the technical aspects of creating or maintaining an AI. I wouldn't much trust Zuckerberg or Dorsey telling us it will all be fine, don't worry, Facebook and Twitter are a force for good in the world.
 
I agree with most of that. As a relative layman, my understanding was that quantum computing isn't really a precursor to AI, but I thought perhaps I was unaware of some creative application of it; I don't have in-depth knowledge of the field.
I wrote an essay on quantum cryptography at uni (my highest mark in anything) but don't remember much of it.
 
What I mean is that we aren’t really on a path to building a general-purpose AI. No one really knows how even to start doing it. There have been some promising developments in meta-learning (especially reinforcement-learning-based), but we are nowhere near even starting to build a true AI.

That said, the implications of AI are already here. AI systems are biased (mostly because the data used to train them comes from humans, who are biased), and that bias can play a large role in their decisions - a role that might be harmful to women, Black people and other groups. This is already happening. In the mid-term, AI could replace many jobs and create serious social problems.
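
As a toy sketch of how that happens (the scenario, group labels and numbers below are entirely made up), a model fitted to biased historical decisions simply replays the bias:

```python
# Invented example: historical hiring records where past (human) decisions
# favoured group "M" regardless of qualifications.
history = [("M", 1)] * 70 + [("M", 0)] * 30 + [("F", 1)] * 30 + [("F", 0)] * 70

def train(records):
    """'Train' by estimating P(hired | group) from the historical labels."""
    return {
        group: sum(h for g, h in records if g == group)
               / sum(1 for g, _ in records if g == group)
        for group in ("M", "F")
    }

def predict(model, group, threshold=0.5):
    """The model just replays the historical hiring rate as its decision."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'M': 0.7, 'F': 0.3} - the human bias, memorised
print(predict(model, "M"))  # True  - hired
print(predict(model, "F"))  # False - rejected: the bias is now automated
```

Real systems are more complicated - the sensitive attribute is often not even an explicit feature, but leaks in through correlated ones - yet the failure mode is the same: the model's "decisions" are the historical data, bias included.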

These are real problems that people should talk about and work on. AI destroying us Terminator/Matrix-style is not a problem right now and won’t be for many decades, if not centuries, to come.
 
None of the people I referenced are talking about a Terminator/Matrix apocalypse, and neither am I. To “quote” Sam Harris: put an AI super-intelligence into the hands of America today, and it would be rational for China to nuke California. This type of AI could (or would) be a “winner takes all” scenario. We don’t know how far we are from creating this type of AI. It could be five days, five years, or decades or centuries away. If and when we do have it, it will likely be too late to think about the consequences unless we prepare now. Something like this will change the world. As far as I’m informed on the matter, very few believe it to be impossible; most believe it to be very likely to happen.

You are right, though, and I agree: there are urgent matters to solve right now, even with the “primitive” versions of AI we have today. And IMO even those may be civilization-ending problems.
 
Ok, this is a bit more agreeable.

We are definitely not 5 days or 5 years away, though. Maybe 5 decades, if we are optimistic - but to be fair, predicting what will happen more than 20 years in advance is a fool’s game.
 
Not sure what I said earlier that was non-agreeable compared to this. :D

We've only had the internet for 25-30 years, and look how it has changed the world. With the rate of progress we're seeing today, not just in AI, I think it's impossible to say where we'll be in 10-20 years. It could be somewhere completely unrecognisable. We're living in very exciting and, at the same time, very, very scary times.
 
https://www.bbc.co.uk/news/science-environment-55133972

One of biology's biggest mysteries has been solved using artificial intelligence, experts have announced.
Predicting how a protein folds into a unique three-dimensional shape has puzzled scientists for half a century.
London-based AI lab, DeepMind, has largely cracked the problem, say the organisers of a scientific challenge.
A better understanding of protein shapes could play a pivotal role in the development of novel drugs to treat disease.
The advance by DeepMind is expected to accelerate research into a host of illnesses, including Covid-19.
Their program determined the shape of proteins at a level of accuracy comparable to expensive and time-consuming lab methods, they say.
Dr Andriy Kryshtafovych, from the University of California, Davis, in the US, one of the panel of scientific adjudicators, described the achievement as "truly remarkable".
"Being able to investigate the shape of proteins quickly and accurately has the potential to revolutionise life sciences," he said.