Artificial Intelligence

Can’t believe I didn’t see this thread earlier. Fascinating.

I’m currently working with a major bank in their (fairly new) Robotics, ML and AI areas. Admittedly I'm not very technical myself; I'm more on the strategy side, managing what we are (or should be) trying to achieve.
 
http://www.mirror.co.uk/tech/googles-artificial-intelligence-becomes-worlds-11653989

An artificial intelligence program has become the world's best chess player in just a few hours - and it did it with almost no intervention from humans.

AlphaZero, developed by Google subsidiary DeepMind, is a descendant of AlphaGo - the AI program that first conquered the Chinese board game Go in 2016.

After four hours of training, it took on the current world champion chess-playing program, Stockfish 8. Out of 100 games, it won 28 and drew the remaining 72.

Even more impressively, it achieved this feat almost completely autonomously.

That is seriously impressive. The exponential growth that has occurred in IT is nothing short of amazing. I may have a very limited understanding of the subject, but might Kurzweil be proved right? Could we actually see a singularity where machines are at least as intelligent as humans by 2029? It was only just over 100 years ago that we first flew a plane, and 20 years ago no social networks existed. The progress we are making is impressive and scary.
 

Read an article on this, with comments from a chess analyst. He said the most interesting thing was that the AI had one loss, which happened due to it overestimating its position :lol:

Edit: definitely read it wrong when I skimmed it, Stockfish was the one that overestimated its position.
 
I may have skimmed the article too quickly; it probably said that one of its losses was due to that.

I'm not even sure whether it's 3 wins for Stockfish at the end, as it says zero losses for both (in the last column), or whether I've interpreted it wrongly.
 
Terminator may become a reality.
Scary shit.
 

The top row shows stats for AlphaZero as white, the bottom row for AlphaZero as black. AlphaZero won 25 games as white and 3 as black; the remaining 72 games were drawn.
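If it helps, that reading of the table can be sanity-checked with a few lines of Python. The per-colour draw counts (25 as white, 47 as black, 50 games per colour) are my assumption to make the 100-game total work; treat them as illustrative, not quoted from the table:

```python
# Tally of the reported 100-game AlphaZero vs Stockfish 8 match,
# split by colour as read above (per-colour draw counts assumed).
results = {
    "white": {"wins": 25, "draws": 25, "losses": 0},
    "black": {"wins": 3, "draws": 47, "losses": 0},
}

totals = {k: sum(r[k] for r in results.values()) for k in ("wins", "draws", "losses")}
games = sum(totals.values())

print(totals, games)  # {'wins': 28, 'draws': 72, 'losses': 0} 100
```

Either way, the "zero losses" column only makes sense for AlphaZero: 28 + 72 = 100, leaving nothing for Stockfish to win.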
 
To read about these advances makes me anxious and somewhat depressed about our future.

To ignore them makes me a fool.
 
We are fecked.





(4.5 seconds)


4.5 seconds seems incredibly long. Pretty sure I could train an algo in MATLAB that would pick him out in a fraction of a second. I can’t watch the video right now; is it down to the fact the arm scans the page?
 
1) 4.5 seconds is not too long. I think the resolution of the camera must be very high in order to see everything on that page; there were hundreds of objects. If it's 8k by 8k pixels, processing takes some time. Add the time to send the image to Google and get the result back, plus the time to move the arm, and it's actually quite a good result.

2) MATLAB is shit. Why on Earth would someone use that junk piece of software?
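For what it's worth, the point about where the time goes can be made concrete with a toy latency budget. Every number below is a made-up assumption for illustration, not a measurement from the demo:

```python
# Rough back-of-the-envelope latency budget for the Wally-finding robot.
# All figures are assumptions for illustration only.
budget_ms = {
    "capture high-res image": 200,      # assumed camera readout time
    "upload to cloud API": 1200,        # assumed for a large image over broadband
    "object detection inference": 800,  # assumed for a big input on a server GPU
    "download result": 100,
    "move arm to target": 2000,         # assumed servo travel time
}

total_s = sum(budget_ms.values()) / 1000
print(f"total = {total_s:.1f} s")  # total = 4.3 s
```

Under assumptions like these, the arm movement alone dominates, so a 4.5-second end-to-end time doesn't look so bad.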
 

1) Resolution seems irrelevant if the goal is to find Wally in a standard Wally page, 8k is completely unnecessary.

2) That’s my point. You could train a basic image recognition library to pick Wally out of a 12/12 grid.
 
Assuming they use a CNN-based approach, resolution is not irrelevant: a big image takes more memory and time than a small one. Provided they are using a Faster R-CNN approach (like most object recognition systems), it will be something like:

- run the image through a backbone CNN
- feed its output to another network which generates object proposals
- classify each object proposal (hundreds of them) and regress bounding boxes
- apply a non-maximum suppression algorithm over all the Wally detections

In reality it is just one giant CNN that does all of that, trained end to end.

Then, finally, move the hand to the object. If you don't know the size of the image and/or the distance between the hand and the object, even moving the hand is not straightforward.
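As a rough illustration of the last step in that list, here is a minimal greedy non-maximum suppression in plain Python. The box format and the 0.5 IoU threshold are assumptions (0.5 is a common default); real detectors use tuned, vectorised implementations:

```python
# Minimal sketch of greedy non-maximum suppression: keep the highest-scoring
# box, drop every box that overlaps it too much, then repeat.
# Boxes are (x1, y1, x2, y2, score) tuples.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, iou_threshold=0.5):
    """Greedy NMS over a list of (x1, y1, x2, y2, score) tuples."""
    remaining = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [b for b in remaining if iou(best[:4], b[:4]) < iou_threshold]
    return kept

# Two overlapping "Wally" candidates plus one detection elsewhere on the page:
detections = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.8), (200, 200, 240, 240, 0.7)]
print(nms(detections))  # keeps the 0.9 box and the far-away 0.7 box
```

The duplicate 0.8 box is suppressed because it overlaps the 0.9 box almost entirely, which is exactly what you want when hundreds of proposals fire on the same Wally.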
 

That’s my point. The preprocessing is negligible, the actual motor action of pointing to the candidate is what takes up the majority of the 4.5 seconds.
 
Nah. It takes more than a second just to run a Faster R-CNN on an image of 2000 x 2000 pixels.
 
An AI writes for the Guardian - Sept 2020
Bumping this for now, despite the reply box prompt. I might change my mind later and bounce posts to a new thread. Humans eh.

Prompted by an experiment from The Guardian to see whether a language generator could write an opinion piece for the paper.

This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.

For this essay, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”

The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.

The AI's opinion piece:
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
 
Interesting!
 
Pretty much, with the consensus of people who know about this stuff apparently inclined towards a negative outcome.

Fecking soon too. 40 years or less. Then it's goodnight Vienna.

It is inevitable. You cannot un-invent AI. We are already well past the point of no return. The very best outcome will be one that is able to manage AI. But I am very far from optimistic that that will be the case.
 

Interesting and cool but a bit selective
 

So sinister.
 
[Image: staircase1.png]

[Image: staircase2.png]


This is both extremely frightening and extremely fecking incredible. I say we go for it, even if we do AI ourselves right out of existence.

Edit: kind of reminds me of this:

 

I read that. Thought it was a bit shit, kind of disjointed, like it was written by multiple authors and pasted together, which made sense when I got to the disclaimer at the end. I would have preferred to read the individual compositions instead.
 


Aren’t ants cleverer than chickens?
 
I was just joking mate, sorry if I offended you
Aye so was I. :lol:
I haven’t a clue where all the species rank between bacteria and ourselves, but the fact that artificial intelligence could go beyond anything we could ever comprehend is pretty amazing to think about. Hopefully we see it in the not-too-distant future. Either way, it’ll be the biggest thing that has ever happened to us imo.
 
I’ve no idea either. I just saw the steps and thought all chickens do is run around in shit, and I’ve seen ants like little armies on Attenborough :lol: The chart is size-ist

As for AI, it amazes me. That written article comes across as quite creepy. I’d need to look into it more, but where the hell does it get its values and aspirations from? Quite bizarre to write that out.