|
Post by c-a-r-f-r-e-w on Mar 16, 2023 19:38:04 GMT
The next challenge for ChatGPT? To save the NHS
GPT has potential in diagnosis, clinical trials and analysing patient data – could it help our ailing health service?
…
There is no business where the accurate distillation of information is as important as in healthcare, where all too often it is a matter of life and death. So could GPT come to the rescue of an ailing NHS?
…
Can GPT relieve pressure on the NHS? I go on. “AI language models like GPT can be used to support the National Health Service in a variety of ways.” It lists several: automating routine tasks, analysing medical data, predicting patient outcomes, identifying potential areas of improvement in healthcare delivery, developing personalised treatment plans – all of which, it turns out, humans agree with.
That sounds exciting, I think. But GPT is cautious. “The NHS has certainly faced challenges in implementing and harnessing the benefits of new technology in the past,” it notes. That, too, is true.
…
Its potential is incredibly exciting but at the moment hard to quantify. “It has absolutely novel capabilities that are surprising even to experts,” says Alberto Favaro, director of healthcare at Faculty, the British AI firm that is currently working with NHS England to harness the technology. In particular, GPT’s creativity is helping the NHS to overcome privacy problems that prevent it mining its troves of data to design better treatments.
Rather than combing through real, often fragmented, records, which must first be sifted to eliminate traces of personal data, the NHS is running a pilot using GPT software to generate “synthetic” histories that reflect reality but are not of real patients. These “ghost patients” allow it to run simulations modelling the impact of different treatment methods, working out which are most effective, without breaching privacy laws. “It’s like you’re asking it to write a novel about the patient,” says Favaro of the software. “It generates the data.”
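To make the "ghost patient" idea a little more concrete, here is a minimal sketch of prompt-driven synthetic record generation, assuming the (pre-1.0) openai Python package. The prompt wording, the JSON fields and the model name are illustrative assumptions, not the configuration of the NHS pilot.

import json
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the reader

PROMPT = (
    "Invent a synthetic, entirely fictional patient history as JSON with the keys "
    "age, sex, conditions, medications and admissions. It must not describe any real person."
)

def generate_ghost_patient() -> dict:
    # One chat-completion call per synthetic record; a higher temperature
    # gives more varied histories across repeated calls.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    # The model is asked for JSON; this will raise if it replies with prose instead.
    return json.loads(response["choices"][0]["message"]["content"])

if __name__ == "__main__":
    print(generate_ghost_patient())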
This creativity could also extend to clinical trials, he says. If GPT was used on the vast amount of existing clinical trial data, it could “suggest candidates for new drugs, or how new trials should be run”.
…
Across the Atlantic, Anil Gehi has seen the power of GPT not just in data mining, but in clinical settings. The US cardiologist tested it by describing the history, symptoms and complications of a real hospital patient, deploying several technical terms, and asked GPT-4 for a treatment plan. It responded with what turned out to be the very plan Gehi had himself initiated. When Gehi tried other situations, he was again impressed.
…
That hasn’t prevented doctors seeing their potential. Research shows LLMs are capable, for example, of sifting through medical records filled with jargon and abbreviations and extracting all the relevant information. In a recent study conducted by Massachusetts Institute of Technology, sentences such as “pt will dc vanco due to n/v” (the patient will discontinue the antibiotic vancomycin due to nausea/vomiting) were accurately deciphered 86 per cent of the time.
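That kind of jargon-expansion can be sketched as few-shot prompting: show the model a worked example of expanded shorthand, then hand it the new note. A minimal sketch, again assuming the (pre-1.0) openai Python package; the example notes and the model name are assumptions for illustration, not the study's actual setup.

import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the reader

# One worked example, then the note we actually want expanded.
FEW_SHOT_PROMPT = (
    "Expand the clinical shorthand into plain English.\n"
    "Note: pt c/o sob on exertion\n"
    "Expansion: The patient complains of shortness of breath on exertion.\n"
    "Note: pt will dc vanco due to n/v\n"
    "Expansion:"
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": FEW_SHOT_PROMPT}],
    temperature=0,  # keep the output as deterministic as possible for extraction tasks
)
print(response["choices"][0]["message"]["content"])
# Expected style of answer: "The patient will discontinue vancomycin
# because of nausea and vomiting."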
…
Favaro suggests it is in such low-risk “paperwork” situations that GPT will break into healthcare. “Anyone knows that when you go to see your GP they are typing much of the time,” he says. “It’s obvious all that could be replaced by software, so that the GP can stop and listen to the patient.”
Telegraph
|
|
|
Post by c-a-r-f-r-e-w on Mar 19, 2023 16:22:13 GMT
AI makes plagiarism harder to detect, argue academics – in paper written by chatbot
Lecturers say programs capable of writing competent student coursework threaten academic integrity
www.theguardian.com/technology/2023/mar/19/ai-makes-plagiarism-harder-to-detect-argue-academics-in-paper-written-by-chatbot

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”.

What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT.

“We wanted to show that ChatGPT is writing at a very high level,” said Prof Debby Cotton, director of academic practice at Plymouth Marjon University, who pretended to be the paper’s lead author. “This is an arms race,” she said. “The technology is improving very fast and it’s going to be difficult for universities to outrun it.”
…
“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good,” he said. “The use of English and quality of grammar is often better than from a student.”
…
Nonetheless, he said academics could still look for clues that a student had used ChatGPT. Perhaps the biggest of these is that it does not properly understand academic referencing – a vital part of written university work – and often uses “suspect” references, or makes them up completely.
…
For years, universities have been trying to banish the plague of essay mills selling pre-written essays and other academic work to any students trying to cheat the system. But now academics suspect even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own.

The Observer has spoken to a number of universities that say they are planning to expel students who are caught using the software.
|
|
|
Post by c-a-r-f-r-e-w on Mar 20, 2023 20:40:27 GMT
More on ChatGPT4, from Nature
“Andrew White, a chemical engineer at University of Rochester, has had privileged access to GPT-4 as a ‘red-teamer’: a person paid by OpenAI to test the platform to try and make it do something bad. He has had access to GPT-4 for the past six months, he says. “Early on in the process, it didn’t seem that different,” compared with previous iterations.
He put to the bot queries about what chemical reaction steps were needed to make a compound, how to predict the reaction yield and how to choose a catalyst. “At first, I was actually not that impressed,” White says. “It was really surprising because it would look so realistic, but it would hallucinate an atom here. It would skip a step there,” he adds. But when, as part of his red-team work, he gave GPT-4 access to scientific papers, things changed dramatically. “It made us realize that these models maybe aren’t so great just alone. But when you start connecting them to the Internet, to tools like a retrosynthesis planner or a calculator, all of a sudden new kinds of abilities emerge.”
And with those abilities come concerns. For instance, could GPT-4 allow dangerous chemicals to be made? With input from people such as White, OpenAI engineers fed back into their model to discourage GPT-4 from creating dangerous, illegal or damaging content, White says.
Fake facts
Outputting false information is another problem. Luccioni says that models like GPT-4, which exist to predict the next word in a sentence, can’t be cured of coming up with fake facts — known as hallucinating. “You can’t rely on these kinds of models because there’s so much hallucination,” she says. And this remains a concern in the latest version, she says, although OpenAI says that it has improved safety in GPT-4.
Without access to the data used for training, OpenAI’s assurances about safety fall short for Luccioni. “You don’t know what the data is. So you can’t improve it. I mean, it’s just completely impossible to do science with a model like this,” she says.”
|
|
|
Post by birdseye on Mar 24, 2023 16:47:11 GMT
Interesting. As one who has taken to lunchtime naps, after 9/10 hours at night, I think I would agree. Been tested for Alzheimer's - negative, so it's not that. But it could be vascular dementia, because once-strong skills like mental arithmetic are no longer there.
|
|
|
Post by c-a-r-f-r-e-w on Mar 24, 2023 17:48:10 GMT
Interesting. As one who has taken to lunchtime naps, after 9/10 hours at night, I think I would agree. Been tested for Alzheimer's - negative, so it's not that. But it could be vascular dementia, because once-strong skills like mental arithmetic are no longer there.

Well, it piqued my interest, because I have long taken naps, even before it was a thing. I used to take them in my study at boarding school in the afternoons. There's research nowadays that suggests they can be beneficial, but I tend to try and keep an open mind and consider both potential pros and cons. Regarding losing some capabilities, they can decline from under-use, and it might be possible to recover some. When I started running a couple of years ago and tried some sprints, I couldn't sprint at all. I could only jog. I wasn't tired; it just felt like the commands to move my legs faster weren't getting through. Read a bit about it, and it seemed like maybe the neural connections had atrophied; with further practice it came back to me surprisingly quickly. (I mean, obviously I can't sprint like when I was young, but it was better than I hoped, tbh.)
|
|
|
Post by c-a-r-f-r-e-w on Mar 28, 2023 19:43:11 GMT
A paper on the latest version, GPT-4, suggests it's showing signs of precursors of AGI (artificial general intelligence). They had early access to the unrestricted version and tested it…
For example: tool use. GPT4 can use tools with minimal instructions and no demonstrations, and make use of them appropriately. It’s an emergent capability, considered a bit of a milestone in AI. Standard chatGPT couldn’t do it.
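For a flavour of what "using tools" means in practice, here is a minimal sketch of the general pattern, assuming the (pre-1.0) openai Python package: the model is told which tools exist, it replies with a line such as "CALL calculator: 23*7+11", the harness runs the tool and feeds the result back. The CALL convention, the prompt and the tiny calculator tool are illustrative assumptions, not how GPT-4 was actually wired up in the paper.

import re
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the reader

def calculator(expression: str) -> str:
    # Deliberately tiny "tool": evaluate basic arithmetic only.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def ask_model(transcript: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": transcript}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

def run(question: str, max_steps: int = 5) -> str:
    transcript = (
        "You may use a tool by replying with exactly 'CALL <tool>: <input>'.\n"
        f"Available tools: {', '.join(TOOLS)}.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = ask_model(transcript)
        match = re.match(r"CALL (\w+): (.+)", reply.strip())
        if not match:
            return reply  # the model chose to answer directly
        tool, arg = match.groups()
        # Append the tool call and its result, then let the model continue.
        transcript += reply + "\nRESULT: " + TOOLS[tool](arg) + "\n"
    return "no answer within the step limit"

print(run("What is 23*7+11?"))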
It passes mock technical interviews on LeetCode and could potentially be hired as a software engineer. It also solved a maths Olympiad problem that requires some ingenuity.
It can do Fermi questions, like how many golf balls it would take to fill a bathtub.
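A Fermi question just means a rough order-of-magnitude estimate built from round numbers. A quick sketch of the bathtub example, where every figure is an assumption chosen for illustration:

import math

bathtub_litres = 150          # a typical bath holds very roughly 150 litres
ball_diameter_cm = 4.3        # a golf ball is about 4.3 cm across
ball_volume_litres = (4 / 3) * math.pi * (ball_diameter_cm / 2) ** 3 / 1000
packing_fraction = 0.64       # randomly packed spheres waste roughly 36% of the space

estimate = bathtub_litres * packing_fraction / ball_volume_litres
print(round(estimate))        # on these assumptions, roughly a couple of thousand balls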
A personal assistant that actually works: it can check your calendar, coordinate with others over email, book a dinner and message others with the details.
An AI handyman: it helped them locate a leak in the bathroom and explain how to fix it.
Mapping: if it can ask enough questions, it can build a mental map of a place it's entering, which will become rather useful once the AI becomes embodied.
Theory of mind: it can build up a mental model of what others are thinking, and can tell the difference between what is actually true and what a human being believes to be true. (Considered a key milestone on the road to possible consciousness.)
|
|
|
Post by EmCat on Mar 29, 2023 14:33:31 GMT
That is interesting. I've sometimes wondered whether some of the advances are now simply a matter of scale, that allow things that weren't obvious before. For example, Conway's Game of Life seems a simple premise - a few rules are in place about whether a blob does or does not exist. Even the Wiki article about it still broadly treats it as an interesting curiosity, but nothing more. On the other hand, at a sufficient scale, someone created a digital clock based on the Game of Life: www.youtube.com/watch?v=3NDAZ5g4EuU

In the same way, the advances in AI (and the previous "AI Winters") may have been limited by the scales that were then achievable - the number of nodes possible in a neural net, for example (some early neural nets were limited simply because of the time taken to perform the calculations). Now that most general-purpose computers are sufficiently fast to perform the calculations (and server-based versions are now accessible via a web or other network link), the advances do look to be coming thick and fast.

Still, all the "AI is absolutely impossible because..." crowd will simply re-define AI so that they are technically correct, even while everyone else just uses them in the same way that they use automatic exposure on a smartphone's camera, rather than messing about with light meters. (As an aside on scale and capabilities, the Apollo Guidance Computer was once the epitome of a "small" computer, and yet it can now be simulated in software one can download for their PC: www.ibiblio.org/apollo/)
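Since the point is how few rules are involved, here is a minimal sketch of the Game of Life on a small wrapping grid; the glider pattern and the grid size are arbitrary choices for illustration.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around at the edges.
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # The entire rule set: a live cell survives with 2 or 3 neighbours,
            # and a dead cell becomes live with exactly 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A glider drifting across a 10x10 grid for 20 generations.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(20):
    grid = step(grid)
print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))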
|
|
|
Post by c-a-r-f-r-e-w on Mar 29, 2023 17:31:04 GMT
|
|
|
Post by c-a-r-f-r-e-w on Mar 29, 2023 17:44:11 GMT
That is interesting. I've sometimes wondered whether some of the advances are now simply a matter of scale, that allow things that weren't obvious before. For example, Conway's Game of Life seems a simple premise - a few rules are in place about whether a blob does or does not exist. Even the Wiki article about it still broadly treats it as an interesting curiosity, but nothing more. On the other hand, at a sufficient scale, someone created a digital clock based on the Game of Life: www.youtube.com/watch?v=3NDAZ5g4EuU In the same way, the advances in AI (and the previous "AI Winters") may have been limited by the scales that were then achievable - the number of nodes possible in a neural net, for example (some early neural nets were limited simply because of the time taken to perform the calculations). Now that most general-purpose computers are sufficiently fast to perform the calculations (and server-based versions are now accessible via a web or other network link), the advances do look to be coming thick and fast. Still, all the "AI is absolutely impossible because..." crowd will simply re-define AI so that they are technically correct, even while everyone else just uses them in the same way that they use automatic exposure on a smartphone's camera, rather than messing about with light meters. (As an aside on scale and capabilities, the Apollo Guidance Computer was once the epitome of a "small" computer, and yet it can now be simulated in software one can download for their PC: www.ibiblio.org/apollo/)

Yes, grow the network and speed it up, and new properties may emerge. There is also the issue of training time, and energy bottlenecks, costs, etc. There's a new AI model from Stanford called Alpaca, which reputedly can already compete with chatGPT, and the training took just a couple of hours and cost $600. That's a big and rapid drop, as the cost of training GPT3… was $5 million, and I think it took several weeks.
|
|
|
Post by mercian on Apr 9, 2023 20:19:12 GMT
c-a-r-f-r-e-w Thanks for the invitation.

As I said on the main thread, we have always had to be cautious with pictures or written sources on the internet, but now we seem to have reached the point where absolutely nothing can be trusted, particularly pictures and sources for written material. AI-generated stuff is not necessarily valueless, but there is enormous potential for bad actors to misuse it. I can see absolutely massive changes in society in the next 5-10 years, on a par with the industrial revolution but massively accelerated.

I've been interested in AI for probably about 40 years, and even made some simple experiments myself, but it seems to have massively accelerated in the last 2 or 3 years. The first really advanced thing I came across was AlphaZero, a chess program which played against itself for 4 hours and then beat a version of the previously-strongest program in a match. It also beat the world Go and Shogi champions IIRC. I haven't heard of it for a couple of years. I wonder what happened. I imagine it could be repurposed for war.

One problem is that there are various definitions of AI. Even a humble in-car SatNav could be considered AI by some definitions, if it has been set to find alternative routes if there is a potential hold-up.

I'll leave it there for now as I don't just want to ramble on.
|
|
|
Post by mercian on Apr 10, 2023 20:00:22 GMT
|
|
|
Post by c-a-r-f-r-e-w on Apr 11, 2023 18:01:23 GMT
Key to intelligence is a refusal to follow the herd, researchers find
Having little integration in animals’ social networks is linked to higher innovation
“It has long been argued that the key to intelligence and innovation lies in a refusal to follow the herd. Scientists have now proved this to be so, in a very literal sense, for goats, camels and wild horses.
Hoofed animals that are less well integrated into their herd are better at problem-solving than those that are deeply embedded in a social group with fellow members of their species, a study has found.
Being something of a loner has its disadvantages but also comes with benefits, according to the research, which found that more independent and less herd-dependent individuals were more “likely to interact with novel stimuli or situations” when they came across something new.
Those embedded in a herd were also more likely to ignore something out of the ordinary and follow the behaviour of those around them.”
…
“Animals that are less well integrated into their herd may need to work harder to find food, the study suggested. “Less integrated individuals may [be] more likely [to] overcome neophobia and deal with novel socioecological challenges to get a better share of resources.” This may explain why having “little integration in the social network was linked to higher innovation” as these animals may “need to rely on innovative behaviour to survive”.”
|
|
|
Post by c-a-r-f-r-e-w on Apr 11, 2023 18:08:37 GMT
c-a-r-f-r-e-w Thanks for the invitation. As I said on the main thread, we have always had to be cautious with pictures or written sources on the internet, but now we seem to have reached the point where absolutely nothing can be trusted, particularly pictures and sources for written material. AI-generated stuff is not necessarily valueless, but there is enormous potential for bad actors to misuse it. I can see absolutely massive changes in society in the next 5-10 years, on a par with the industrial revolution but massively accelerated.

Nice to see you here in the thread, Pete! Yes, I’ve been having a - very cursory - look at how things might develop in the next few years for GPT: one possibility is that it could accelerate quickly for a few years but may then slow down - I’ll post about it in the next few days.
|
|
|
Post by mercian on Apr 11, 2023 22:41:16 GMT
Key to intelligence is a refusal to follow the herd, researchers find
Having little integration in animals’ social networks is linked to higher innovation

“It has long been argued that the key to intelligence and innovation lies in a refusal to follow the herd. Scientists have now proved this to be so, in a very literal sense, for goats, camels and wild horses.
Hoofed animals that are less well integrated into their herd are better at problem-solving than those that are deeply embedded in a social group with fellow members of their species, a study has found.
Being something of a loner has its disadvantages but also comes with benefits, according to the research, which found that more independent and less herd-dependent individuals were more “likely to interact with novel stimuli or situations” when they came across something new.
Those embedded in a herd were also more likely to ignore something out of the ordinary and follow the behaviour of those around them.”
…
“Animals that are less well integrated into their herd may need to work harder to find food, the study suggested. “Less integrated individuals may [be] more likely [to] overcome neophobia and deal with novel socioecological challenges to get a better share of resources.” This may explain why having “little integration in the social network was linked to higher innovation” as these animals may “need to rely on innovative behaviour to survive”.”

That's me in a nutshell. If there is a 'herd' position then I usually try to find a way to oppose it. I probably overdo that sometimes. 😁
|
|
|
Post by c-a-r-f-r-e-w on Apr 12, 2023 1:30:58 GMT
Saw this in the Times and thought of your chess posts, mercian

Has artificial intelligence ruined chess?
“The Niemann-Carlsen scandal has precipitated a crisis in chess, forcing painful existential questions to be asked. Has technology rendered the game pointless? In the age of AI, can cheating be wiped out? Are superpowerful computers taking the joy out of the game? And what does the future of chess look like?
… “ During the pandemic, when over-the-board tournaments were no longer possible, a rash of cheating scandals broke out. More and more people were playing — thanks in no small part to the interest in the game sparked by the hit Netflix series The Queen’s Gambit. Prize money for online events was rising to as much as $1 million, and with it the temptation to break the rules by consulting a computer. The Danish-Scottish chess grandmaster and influential blogger Jacob Aagaard describes the situation as “a real cheating crisis”.
Getting away with cheating over the board is obviously trickier, but ever since personal computers first came on the market the elite game has been tarnished with instances of players either smuggling them on their person or darting out of competitions to consult them in lavatories”
Sunday Times
|
|
|
Post by leftieliberal on Apr 12, 2023 8:27:59 GMT
Saw this in the Times and thought of your chess posts, mercian

Has artificial intelligence ruined chess?
“The Niemann-Carlsen scandal has precipitated a crisis in chess, forcing painful existential questions to be asked. Has technology rendered the game pointless? In the age of AI, can cheating be wiped out? Are superpowerful computers taking the joy out of the game? And what does the future of chess look like?
… “ During the pandemic, when over-the-board tournaments were no longer possible, a rash of cheating scandals broke out. More and more people were playing — thanks in no small part to the interest in the game sparked by the hit Netflix series The Queen’s Gambit. Prize money for online events was rising to as much as $1 million, and with it the temptation to break the rules by consulting a computer. The Danish-Scottish chess grandmaster and influential blogger Jacob Aagaard describes the situation as “a real cheating crisis”.
Getting away with cheating over the board is obviously trickier, but ever since personal computers first came on the market the elite game has been tarnished with instances of players either smuggling them on their person or darting out of competitions to consult them in lavatories”
Sunday Times

Cheating is a real problem when the stakes get high enough. One solution is a lifetime ban for those caught (e.g. Niemann, who has admitted to cheating twice). Of course anyone who is good enough to be playing at an elite level is also good enough to be able to remember long sequences of moves. When I played chess more than 50 years ago (not well, I should add) I knew county-standard players who could memorise whole lines out of Modern Chess Openings. When you are playing against elite players you can rely on them not making blunders in the openings, so the number of variations you have to remember is much smaller than the number of possible variations.
|
|
|
Post by c-a-r-f-r-e-w on Apr 12, 2023 13:41:56 GMT
Saw this in the Times and thought of your chess posts, mercian

Has artificial intelligence ruined chess?
“The Niemann-Carlsen scandal has precipitated a crisis in chess, forcing painful existential questions to be asked. Has technology rendered the game pointless? In the age of AI, can cheating be wiped out? Are superpowerful computers taking the joy out of the game? And what does the future of chess look like?
… “ During the pandemic, when over-the-board tournaments were no longer possible, a rash of cheating scandals broke out. More and more people were playing — thanks in no small part to the interest in the game sparked by the hit Netflix series The Queen’s Gambit. Prize money for online events was rising to as much as $1 million, and with it the temptation to break the rules by consulting a computer. The Danish-Scottish chess grandmaster and influential blogger Jacob Aagaard describes the situation as “a real cheating crisis”.
Getting away with cheating over the board is obviously trickier, but ever since personal computers first came on the market the elite game has been tarnished with instances of players either smuggling them on their person or darting out of competitions to consult them in lavatories”
Sunday Times

Cheating is a real problem when the stakes get high enough. One solution is a lifetime ban for those caught (e.g. Niemann, who has admitted to cheating twice). Of course anyone who is good enough to be playing at an elite level is also good enough to be able to remember long sequences of moves. When I played chess more than 50 years ago (not well, I should add) I knew county-standard players who could memorise whole lines out of Modern Chess Openings. When you are playing against elite players you can rely on them not making blunders in the openings, so the number of variations you have to remember is much smaller than the number of possible variations.

Not sure, but was it Carlsen who had a thing of throwing in strange moves to take things off piste? Anyways, in the Times article it says that things are tilting more towards blitz chess to try and limit the damage.
|
|
|
Post by c-a-r-f-r-e-w on Apr 12, 2023 13:45:34 GMT
Key to intelligence is a refusal to follow the herd, researchers find
Having little integration in animals’ social networks is linked to higher innovation

“It has long been argued that the key to intelligence and innovation lies in a refusal to follow the herd. Scientists have now proved this to be so, in a very literal sense, for goats, camels and wild horses.
Hoofed animals that are less well integrated into their herd are better at problem-solving than those that are deeply embedded in a social group with fellow members of their species, a study has found.
Being something of a loner has its disadvantages but also comes with benefits, according to the research, which found that more independent and less herd-dependent individuals were more “likely to interact with novel stimuli or situations” when they came across something new.
Those embedded in a herd were also more likely to ignore something out of the ordinary and follow the behaviour of those around them.”
…
“Animals that are less well integrated into their herd may need to work harder to find food, the study suggested. “Less integrated individuals may [be] more likely [to] overcome neophobia and deal with novel socioecological challenges to get a better share of resources.” This may explain why having “little integration in the social network was linked to higher innovation” as these animals may “need to rely on innovative behaviour to survive”.”

That's me in a nutshell. If there is a 'herd' position then I usually try to find a way to oppose it. I probably overdo that sometimes. 😁

In the past I tended to avoid boards that had declined, as they could at times seem a bit Mad Max, but it’s been interesting to see that you seem to maybe get a higher proportion of the more unaffiliated, self-contained sorts.
|
|
|
Post by c-a-r-f-r-e-w on Apr 13, 2023 15:51:25 GMT
China plots electric car invasion of Europe with in-car fridges and facial recognition
Zeekr's new model will cost almost half as much as a Tesla
…
However, stringent European safety demands could add to the price when it comes to the market in the West.
Up to 30 new electric vehicle brands are eyeing up the UK car market, most of them Chinese.
Britain is an attractive market because it will be one of the first countries in the world to ban fossil fuel engines, with the sale of new non-hybrid petrol and diesel cars outlawed from 2030.
Companies such as BYD and Ora already have agreements in place with UK dealers. They will be joined by a raft of other car makers including Chery, Dongfeng and Haval.
…
Earlier this month HiPhi, which was co-founded by a former executive at Jaguar Land Rover (JLR), said it was plotting a European launch for its motors and considering proposals to bring the brand to the UK.
The Chinese company’s cars feature advanced headlights that can be used to project films, as well as dispensers that release different scents according to the mood of the driver.
|
|
|
Post by leftieliberal on Apr 14, 2023 10:35:10 GMT
Quanta article on high-dimensional computing: www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/

One of the chief criticisms of artificial neural networks is that how they reach their answers is not clear; it is all in the network weights, but there is no easy way to go from network weights to a set of logical statements that humans can understand. Hyperdimensional computing uses vectors in a high-dimensional space (tens of thousands of orthogonal vectors). Because of orthogonality the process can be reversed to determine how a particular result was achieved. In practice the vectors are near-orthogonal rather than exactly orthogonal, but the errors arising can be kept small by using a sufficiently large number of dimensions.
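A minimal numpy sketch of the two properties described there, assuming the common choice of random +/-1 "hypervectors": vectors drawn at random in 10,000 dimensions are nearly orthogonal, and binding two of them by element-wise multiplication can be undone exactly. The dimension and the binding operation are illustrative choices rather than anything taken from the Quanta article.

import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality of the hypervectors

def random_hv():
    # A random vector whose entries are +1 or -1.
    return rng.choice([-1, 1], size=d)

a, b = random_hv(), random_hv()

# Near-orthogonality: the cosine similarity of two random hypervectors is
# close to zero (its standard deviation is about 1/sqrt(d), i.e. ~0.01 here).
print(np.dot(a, b) / d)

# Binding by element-wise multiplication is exactly reversible for +/-1 vectors:
# multiplying by b a second time recovers a.
bound = a * b
recovered = bound * b
print(np.array_equal(recovered, a))  # True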
|
|
|
Post by birdseye on Apr 19, 2023 20:11:51 GMT
Quanta article on high-dimensional computing: www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/ One of the chief criticisms of artificial neural networks is that how they reach their answers is not clear; it is all in the network weights, but there is no easy way to go from network weights to a set of logical statements that humans can understand. Hyperdimensional computing uses vectors in a high-dimensional space (tens of thousands of orthogonal vectors). Because of orthogonality the process can be reversed to determine how a particular result was achieved. In practice the vectors are near-orthogonal rather than exactly orthogonal, but the errors arising can be kept small by using a sufficiently large number of dimensions.

Thanks for the link - I had forgotten Quanta. An easy way to waste a few hours when you should be doing chores!
I am not sure why reversal matters. Can we really reverse our own thought processes? Some we can, of course, but many we can't. Intuition, those mental jumps, is what makes intelligence rather than simple processing.
|
|
|
Post by birdseye on Apr 23, 2023 7:17:09 GMT
Interesting. As one who has taken to lunchtime naps, after 9/10 hours at night, I think I would agree. Been tested for Alzheimer's - negative, so it's not that. But it could be vascular dementia, because once-strong skills like mental arithmetic are no longer there.

Well, it piqued my interest, because I have long taken naps, even before it was a thing. I used to take them in my study at boarding school in the afternoons. There's research nowadays that suggests they can be beneficial, but I tend to try and keep an open mind and consider both potential pros and cons. Regarding losing some capabilities, they can decline from under-use, and it might be possible to recover some. When I started running a couple of years ago and tried some sprints, I couldn't sprint at all. I could only jog. I wasn't tired; it just felt like the commands to move my legs faster weren't getting through. Read a bit about it, and it seemed like maybe the neural connections had atrophied; with further practice it came back to me surprisingly quickly. (I mean, obviously I can't sprint like when I was young, but it was better than I hoped, tbh.)

I have tried to take the same approach to mental exercise by doing units of the OU Maths degree course, but sadly they have become too expensive to continue. Not sure that Wordle and Nerdle count as substitutes!
|
|
|
Post by c-a-r-f-r-e-w on Apr 23, 2023 18:10:04 GMT
Well, it piqued my interest, because I have long taken naps, even before it was a thing. I used to take them in my study at boarding school in the afternoons. There's research nowadays that suggests they can be beneficial, but I tend to try and keep an open mind and consider both potential pros and cons. Regarding losing some capabilities, they can decline from under-use, and it might be possible to recover some. When I started running a couple of years ago and tried some sprints, I couldn't sprint at all. I could only jog. I wasn't tired; it just felt like the commands to move my legs faster weren't getting through. Read a bit about it, and it seemed like maybe the neural connections had atrophied; with further practice it came back to me surprisingly quickly. (I mean, obviously I can't sprint like when I was young, but it was better than I hoped, tbh.)

I have tried to take the same approach to mental exercise by doing units of the OU Maths degree course, but sadly they have become too expensive to continue. Not sure that Wordle and Nerdle count as substitutes!

Yes, in my view stuff like that should remain very affordable. There are resources online for maths but I haven't checked them out yet - thinking I might revisit some bits of maths myself.
|
|
|
Post by c-a-r-f-r-e-w on Apr 24, 2023 21:11:27 GMT
AI taskforce given £100 million to develop BritGPT
Government-backed chatbot could be similar to ChatGPT and be used by the NHS
A British version of ChatGPT that would compete with versions from big tech and be used by the NHS and other public bodies has received a £100 million investment.
The government has pledged the money to a taskforce modelled on the Covid-19 vaccine team to start developing the artificial intelligence.
ChatGPT and similar AI, also called large language or foundation models, can give users access to huge amounts of data and knowledge through an easy chat interface.
Their ability to write, summarise and interpret large bodies of information is seen as revolutionary.
…
Experts have been urging the government to start developing its own artificial intelligence so that public bodies such as the NHS and government departments can use it.
The AI Council, an independent expert committee, had written to ministers urging a national investment, a call echoed by Sir Tony Blair and Lord Hague of Richmond in a recent report.
Some believe the health service could use AI to develop new drugs and advance diagnoses.
…
A Foundation Model taskforce was set up last month to bring together leading experts in the field to develop “sovereign AI”.
It has now been given £100 million by the Department of Science, Innovation and Technology to invest in infrastructure and procurement, and to start pilot projects in the next six months.
The investment comes on top of about £900 million for a new “exascale” supercomputer that was announced in the budget.
….
Other experts were more cautious. Alan Woodward, professor of cybersecurity at the University of Surrey, said…
…
“Sharing sensitive data with these LLM’s [large language models] is something that concerns me. The technology should not get a free pass for access to sensitive personal information that represents an arbitrary interference with our privacy.
“My concern is deepened by my experience so far of LLM that they are quite likely to display sensitive information if they have access to it.”
|
|
|
Post by leftieliberal on Apr 28, 2023 12:09:11 GMT
|
|
|
Post by leftieliberal on May 1, 2023 17:08:11 GMT
SpaceX rocket took 40 seconds to respond to self-destruct command: gizmodo.com/spacex-struggled-to-destroy-its-failing-starship-rocket-1850390877

What with also demolishing the launch pad, Musk's rocket has lots of problems. As Dick Feynman famously said in the Challenger shuttle enquiry, "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled".
|
|
|
Post by alurqa on May 7, 2023 14:33:40 GMT
I downloaded www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/datasets/localauthoritiesinenglandtable2 which is an Excel spreadsheet. I exported the data as a CSV file, but using TABs to separate fields. I then wrote this awk program (see attachment) to sum up different age groups from this forecast data (a rough Python equivalent of the same aggregation is sketched at the end of this post). table2.awk (1.24 KB)

This is the result of running my program against the ONS data above (population and share of the total for each age band):

Year   Under 25s    %     25-64        %     65+          %
2018   16753941    29.9   29043984    51.8   10179253    18.1
2019   16776377    29.7   29211100    51.8   10355595    18.3
2020   16813148    29.6   29359989    51.8   10505333    18.5
2021   16834858    29.5   29485705    51.7   10669007    18.7
2022   16848252    29.4   29586828    51.6   10847025    18.9
2023   16870030    29.3   29645992    51.5   11041499    19.1
2024   16889998    29.2   29685059    51.3   11241833    19.4
2025   16924139    29.1   29686746    51.1   11449350    19.7
2026   16972311    29.1   29647334    50.8   11677599    20
2027   17035852    29.1   29569303    50.5   11922568    20.3
2028   17087482    29     29488441    50.1   12175728    20.7
2029   17120538    29     29411899    49.8   12437059    21
2030   17142851    28.9   29341940    49.5   12697007    21.4
2031   17137740    28.8   29305680    49.3   12945682    21.7
2032   17120063    28.7   29285951    49.1   13186211    22.1
2033   17081837    28.5   29301699    49     13408469    22.4
2034   17051326    28.4   29313863    48.8   13623803    22.7
2035   17012864    28.2   29355575    48.7   13815475    22.9
2036   16965297    28     29394931    48.6   14017587    23.2
2037   16906482    27.9   29474398    48.6   14190801    23.4
2038   16873067    27.7   29563253    48.6   14329933    23.5
2039   16858714    27.6   29666750    48.6   14436341    23.6
2040   16852318    27.5   29778477    48.6   14527073    23.7
2041   16845790    27.4   29906550    48.7   14601625    23.7
2042   16859161    27.3   30029391    48.7   14661072    23.8
2043   16894313    27.3   30122817    48.7   14726968    23.8
What is striking is that, over the next 20 years on the current assumptions, the number of:

* under 25s will increase from 16.8m to 16.9m;
* 25 - 64s will increase from 29.0m to 30.1m;
* the 65 and overs will increase from 10.1m to 14.7m.
So there is an increase in all three age groups, BUT, in percentages of the total population:
* under 25s will decrease from 29.9% to 27.3% (a drop of 2.6 percentage points);
* 25 - 64s will decrease from 51.8% to 48.7% (a drop of 3.1 percentage points);
* the 65 and overs will increase from 18.1% to 23.8% (a rise of 5.7 percentage points).

So by 2043 a working-age share that is 3.1 percentage points smaller will have to support a pensioner share that is 5.7 percentage points larger; put another way, the number of over-65s per person aged 25-64 rises from roughly 0.37 in 2023 to about 0.49 in 2043. Hmm.
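For anyone without awk to hand, here is a minimal Python sketch of the same kind of aggregation. The filename and the column layout (one row per area, age and year, with a population count) are assumptions made purely for illustration; the real ONS spreadsheet is laid out differently, and the actual summation lives in the attached table2.awk.

import csv
from collections import defaultdict

def band(age: int) -> str:
    # The same three age groups used above.
    if age < 25:
        return "Under 25s"
    if age < 65:
        return "25-64"
    return "65+"

BANDS = ("Under 25s", "25-64", "65+")
totals = defaultdict(int)  # (year, band) -> summed population

with open("projections.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        totals[(int(row["year"]), band(int(row["age"])))] += int(float(row["population"]))

for year in sorted({year for year, _ in totals}):
    year_total = sum(totals[(year, g)] for g in BANDS)
    print(year, "  ".join(
        f"{g}: {totals[(year, g)]} ({100 * totals[(year, g)] / year_total:.1f}%)"
        for g in BANDS
    ))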
|
|
|
Post by c-a-r-f-r-e-w on May 9, 2023 15:27:07 GMT
SpaceX rocket took 40 seconds to respond to self-destruct command: gizmodo.com/spacex-struggled-to-destroy-its-failing-starship-rocket-1850390877 What with also demolishing the launch pad, Musk's rocket has lots of problems. As Dick Feynman famously said in the Challenger shuttle enquiry, "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled".

SpaceX have shown an ability to progress quickly, so I guess we will see. I do tend to think the limiting factor ultimately is how well they survive re-entry, how well the heat shield works etc.
|
|
|
Post by mercian on May 12, 2023 21:29:22 GMT
I downloaded www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/datasets/localauthoritiesinenglandtable2 which is an Excel spreadsheet. I exported the data as a CSV file, but using TABs to separate fields. I then wrote this awk program (see attachment) to sum up different age groups from this forecast data. View Attachment

This is the result of running my program against the ONS data above: [full table snipped; see the post above]
What is striking is that, over the next 20 years on the current assumptions, the number of:

* under 25s will increase from 16.8m to 16.9m;
* 25 - 64s will increase from 29.0m to 30.1m;
* the 65 and overs will increase from 10.1m to 14.7m.
So there is an increase in all three age groups, BUT, in percentages of the total population:
* under 25s will decrease from 29.9% to 27.3% (a drop of 2.6 percentage points);
* 25 - 64s will decrease from 51.8% to 48.7% (a drop of 3.1 percentage points);
* the 65 and overs will increase from 18.1% to 23.8% (a rise of 5.7 percentage points).

So by 2043 a working-age share that is 3.1 percentage points smaller will have to support a pensioner share that is 5.7 percentage points larger. Hmm.
Interesting results. There are lots of variables which could affect the forecasts though. For instance, government estimates of immigration tend to be low, especially for illegal immigration, for obvious reasons, and immigrants tend to be young. The government is already talking about increasing the state pension age again. Also, perhaps a citizen's wage will come in, as there could be mass unemployment as a result of AI.

I was watching a couple of chaps doing a really good job of clearing a neighbour's gutters and cleaning a flat roof etc. today and thought that that sort of job would be one of the last to go. Office drones doing repetitive tasks will be phased out pretty quickly I think. Apart from in the Civil Service and NHS, obviously.
|
|