Post by c-a-r-f-r-e-w on Sept 24, 2024 11:16:30 GMT
It would also be of interest if folk could comment on how quickly they think AI is going to impact jobs and society, how quickly we might see general intelligence, etc.
If you are lurking, feel free to contribute to the poll!
Post by mercian on Sept 24, 2024 14:03:31 GMT
I think that AI will most easily replace relatively routine office jobs such as data analysis, which was what I did at the end of my so-called career. It could also easily replace a lot of senior management (IMO), though I can't see that happening in practice for a long time. A friend of mine says that he experimented with getting it to write computer programs a couple of years ago and it came up with quite respectable code. There will be new jobs, such as being the chap who knows how to ask the right questions of AI. It could be the start of a new industrial revolution, as in theory a lot of processes could become a lot more efficient, such as the search for new antibiotics which I heard about not long ago.

There are also dangers, of course. I'd be surprised if criminals aren't already using it. One obvious way would be to design scamming emails and fake websites. Also, of course, military applications.

I expect there to be a pretty rapid (couple of decades?) radical change in society, including in ways that can't be foreseen. Results will be mixed - the 18th and 19th century industrial revolution caused a huge upheaval and conditions for many were appalling for generations, but it led to the modern world, which is very comfortable for the vast majority of people compared to, say, working in a cotton mill or match factory with no mod cons in the home. In the end we will adapt to life under our new masters. 😁
Post by bardin1 on Sept 24, 2024 17:42:27 GMT
I have read too many Philip K Dick books to get away from a feeling of unease about this.
My son is a data analyst for the BBC and I discuss this with him frequently. He's fine with it, but I am still unconvinced that the risks are covered.
Post by John Chanin on Sept 24, 2024 18:56:10 GMT
While most science fiction is dystopian, I am a great fan of Iain Banks, whose AIs are delightfully positive. Why, he thinks, should they be hostile? Self-evidently, any intelligence, even partial, with a huge memory and access to a large library is going to outperform humans on anything routine. It may not necessarily be much good at new ideas, but it will be at how to implement them. But it will struggle with emotions unless it becomes fully sentient, and maybe even then.
Post by bardin1 on Sept 26, 2024 8:55:14 GMT
My concern is not that AI itself would be hostile, but that it could be misused, either through incompetence or malignant design, to be hostile to innocent people. As far back as Asimov's groundbreaking robot laws in I, Robot, the potential has been recognised.

PS Banks is great. Might watch the Crow Road adaptation again...
Post by birdseye on Oct 19, 2024 18:51:58 GMT
My concern is that the use of AI by malevolent people could easily destroy the availability of reliable information for the general public. We have already seen how it can be used to produce completely convincing videos of politicians saying things that they never said. It is easy to imagine what Trump could be made to appear to say, and since he already generates strong feelings against him, people would be very inclined to believe what they saw. www.bbc.co.uk/news/articles/cg33x9jm02ko illustrates the point.

In a world where young people in particular have moved away from journalistic reporting in the press and on TV towards getting their news from unverified internet feeds, AI-generated false information could be devastating. Southport might not have been AI-generated, but would it have been any different if someone had decided to create the story using AI? And this, of course, is early days.
Post by alec on Oct 23, 2024 13:17:24 GMT
Coming from the north east, up here we're somewhat more worried about Whay AI intelligence.