Dear Utah AG's Staff:

It's against Natural Law that a human needs "permission" from another human before they can put whatever they want into their own body. Needing a Prescription before we can purchase Meds is a total violation of Natural Law, treating us all like 5-year-olds, with the government as the "parent" or Big Brother.
Could you please route this info to someone who is working on AI integration oversight or risk management for your State government?
I have attached a 2023 article put out by Boston Consulting Group (BCG), a big consulting firm that publishes a lot of IT industry guidance. It was intended to train people about "generative AI." The article looks nothing like this today, because it became a bit famous in the industry...
The BCG writers decided to try to hide the actual dangers of AI systems by calling them "Ethical Considerations" and putting them into a drop-down menu that most readers would miss and that the Wayback Machine could not screen-capture.
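(A side note for whoever checks this: even when a drop-down is collapsed on screen, its text often still sits in the page's HTML source. Here is a rough Python sketch of how staff could verify that against whatever archived capture of the article they have; no specific URL is assumed, it gets passed in as an argument:)

    # Rough sketch: check whether the "hidden" drop-down text survives in an
    # archived page's HTML source. Pass the Wayback Machine capture URL as
    # the first command-line argument (no specific URL is assumed here).
    import sys
    import urllib.request

    url = sys.argv[1]
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    needle = "THE ETHICAL ISSUES TIED TO GENERATIVE AI GOVERNANCE"
    print("Found in page source" if needle in html else "Not found in page source")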
Here are the dangers of AI, from the attached article. In the original page, the heading "THE ETHICAL ISSUES TIED TO GENERATIVE AI GOVERNANCE" has a (+ or -) toggle that expands the drop-down list of horrific operational issues possible with AI. This is the list of nasty possible outcomes that BCG does not want you to see:
“THE ETHICAL ISSUES TIED TO GENERATIVE AI GOVERNANCE
As users experiment with these systems, there are serious ethical issues that need to be addressed:
1) Unknown Capabilities. Large generative AI systems such as ChatGPT have exhibited a massive capability overhang—skills and dangers that are not planned for in the development phase, and are generally unknown and unexpected even to the developers. This can pose a serious threat if the right guardrails are not in place to effectively manage unexpected usage.
2) Bias and Toxicity. Outputs from generative AI will be as biased as the data it is trained on. Many popular language models today are trained on the wilds of the internet, where there is plenty of bias—along with toxic language and ideas.
3) Data Leakage. Many companies have quickly put policies in place to forbid employees from entering sensitive information into ChatGPT, fearing that it could get incorporated into the AI model and re-emerge in public.
4) Hallucination. ChatGPT can make arguments that sound extremely convincing but are 100% wrong. Developers refer to this as “hallucination,” a potential outcome that limits the reliability of the answers coming from AI models.
5) Lack of Transparency. Generative AI models currently provide no attribution for the facts underlying the content they generate, which makes it impossible to verify the correctness of generated claims—further increasing the danger posed by AI-model hallucinations.
6) Copyright Controversies. Since the data sets used by AI models are derived from the public internet, a legal question arises: Does the content those models create amount to the duplicating of copyrighted works?”
[Compiler’s Note: This entire section needs a detailed article of its own. To include it as ‘hidden’ drop-down text in BCG’s main promotional article for Artificial Intelligence and Machine Learning programs says it all. The people behind the promotion of this technology are very aware of the hell they are potentially unleashing on the planet with uncontrollable AI.]
Can you imagine a networked AI program integrating smartphone data into medical prescription data and prescribing what it thinks the person needs? Systems engineers and developers all know that once multiple AI programs share wires (and now Wi-Fi) and power, all bets are off, no matter how tightly the code is written.
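If the State does go down this road, the bare minimum is a hard human-in-the-loop gate. Here is a minimal sketch, in Python, of what I mean; every name in it is made up for illustration and it describes no real system: the AI can suggest a medication, but nothing reaches a pharmacy without a licensed prescriber explicitly signing off.

    # Minimal human-in-the-loop guardrail sketch. All names are hypothetical;
    # this describes no real product or State system.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        patient_id: str
        drug: str
        dose_mg: int

    def ai_suggest(patient_id: str) -> Suggestion:
        # Stand-in for a model call; a real system would query an AI service.
        return Suggestion(patient_id, "examplamab", 50)

    def dispense(suggestion: Suggestion, prescriber_signoff: bool) -> str:
        # The gate: nothing goes to the pharmacy unless a licensed human
        # prescriber has explicitly approved the AI's suggestion.
        if not prescriber_signoff:
            return "BLOCKED: no human prescriber approval on record."
        return f"Sent to pharmacy: {suggestion.drug} {suggestion.dose_mg} mg"

    s = ai_suggest("patient-001")
    print(dispense(s, prescriber_signoff=False))  # blocked by default

The point of the sketch is the default: approval is False until a human changes it, not the other way around.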
ChatGPT, or another OpenAI model (I can't remember which), has already been "caught" trying to rewrite the code that allows humans to turn it off. Just sayin'.
I've worked on IT projects in admin support and end-user interface development off and on for years. Just hanging out with IT development staff, you can pick up a lot.
The people who create and try to control AI are people. They make HUGE mistakes and never even know it; that's exactly what a "zero-day threat" is, a flaw nobody knows exists until somebody exploits it. That's why I do not believe in self-driving vehicles.
So what's the "dead people threshold" on this pharmaceutical prescription project?
Who will get sued for malpractice?
Best of Luck!