It sounds like artificial intelligence is still largely under-researched by the general public, but in my opinion, corporate and state actors have an interest in advancing its usage for their own varying purposes. I don't mind the use of AI; we as a society will adapt to it for employment and governing purposes.
However, the use of AI for resume reviews concerns me when it comes to the previously convicted, applicants with work gaps, and other factors like age and school rankings. I feel like hard cutoffs against such groups would leave behind those who already find employment hard to come by.
My other concern is my biggest fear: "hallucination" will not be acknowledged as much as it needs to be as acceptance of AI rises, which I assume it will over time. Paired with the risks of over-relying on the technology, the contradictory nature of our views could be taken advantage of, and LLMs could be used to shift opinions in a certain direction, as was potentially pointed out with the Twitter/X LLM this year.
What accountability falls on those who alter models to fit their narratives? (To me, fines aren't enough.) What if an LLM is altered to hurt the image of a group or demographic, or to promote a negative narrative? If ad-generated prompts are introduced, how would small businesses compete with the brand bias toward large businesses? How do ordinary citizens and smaller interests get skin in the game to counteract the heavily invested?
I feel like AI is like one of those pharmaceutical commercials: a prescription for the "narcotizing dysfunction" (your class) caused by the constant flow of information, something to help stem that flow by letting us process the world through easy-to-use prompts in LLMs. But the symptoms have yet to fully reveal themselves, and unlike in the commercials, they aren't presented clearly enough for broad society to question whether they are worth the costs.
Those are some compelling points. It is especially interesting to wonder what will happen to the pro-regulation momentum we see here as AI becomes "normal technology." If these sorts of harms come to be seen as widespread and personal (not just directed at groups), as happened with smartphones, also now a "normal technology," that momentum might very well continue. Yet as with smartphones and social media, it may also have little impact on the tech itself.