Blog
Headlines:
Editorial opinion: Would you hire a dyslexic A.G.I. bot to replace a dyslexic employee?
Opinion by J. Shoy - 2025-01-07
The following is an opinion piece by J. Shoy and does not reflect the views of FDBD or any of our volunteers or staff. We do not endorse what the author says. This article contains speculative content and controversial opinions. Readers are advised to approach the material with critical thinking and to seek additional sources for a balanced perspective.
After the announcements by Sam Altman this week, it seems we, as in all of humanity, stand on the precipice of a new kind of AGI, or so we are told. So we must now confront a sudden new quagmire of questions about equity and inclusivity for disabled individuals.
There are many ways of looking at this, but I think the following hypothetical scenario is worth considering:
Imagine yourself a hiring manager with the choice between two chatbots, either of which could replace a worker. Let's say one chatbot was designed on neurotypical architecture and training data, and the other was designed on dyslexic architecture and training data. Which one would you choose? Would you offer the same wage to both?
When presented with this question, people may immediately respond, 'How could an LLM, a large language model, even have dyslexia?' Well, I must first remind you that large language models (LLMs), as they exist today, are built from huge sets of training data, and it is not a settled question how to psychologically evaluate the mental state of an LLM. We humans are far from understanding LLMs to that extent, or from being able to psychoanalyze them or assess them for learning disabilities.
However, it is possible to imagine creating an LLM trained purely on uncorrected documents produced by people with dyslexia, dysgraphia, or dyscalculia, for example. Such a hypothetical dyslexic LLM, with only uncorrected dyslexic writing as its training data, would likely exhibit classic dyslexic traits. Now let's say the task at hand is writing and programming. You theoretically have two chatbots to choose from: one trained on a large swathe of neurotypical writing and code, and the other trained on an equally large swathe of uncorrected, raw writing and code created by dyslexic individuals.
So you have:
Bot-A: trained on a specific genre of information about computer programming, and writing related to computer programming, produced generally by neurotypical individuals and not corrected by a spell checker or otherwise.
Bot-B: trained on the same genre of information about computer programming, and writing related to computer programming, produced only by generally severely dyslexic individuals and not corrected by a spell checker or otherwise.
When completing the tasks of writing documents and programming code, each bot would have access to auto-complete and the similar tools people have access to today, so in this hypothetical scenario some basic accommodation could be made.
For this thought experiment, let's leave out things like the amount of data, the complexity of the data, and the skill level of the programmers contributing to the datasets. Let's say everything is equal: the complexity of the data is the same, and the skill level is the same.
One might simply say, 'Why would we need to make this decision? We could simply pick the bot that is best at its task, and if all things are equal, they would likely be equally up to the task.' However, it's not that easy. Let's go a step further…
Let's say there's no real way to fully evaluate the bots' abilities at this level of writing and programming. They both passed similar assessments, but the dyslexic bot was given extra resources and extra time to complete the tasks, to account for the fact that it was dyslexic. And since we're talking about AGI, it is going to become increasingly difficult to simply assess the skills and abilities of advanced LLMs in the near future.
You need look no further than OpenAI's o3 announcements about its ARC evaluation (called "ARC-AGI-Pub"; check it out: https://arcprize.org/blog/oai-o3-pub-breakthrough) to see that this question is not far-fetched, but coming to fruition.
In fact you might ask yourself: is it even ethical to evaluate these two bots side by side? They've passed certification testing to show that they have the basic abilities, so beyond that, can we even ethically ask who you would hire, why, and whether you would pay them the same?
Some might sidestep the question altogether by saying that most LLMs are built from a collection of work produced by people with disabilities and by neurotypical and neurodivergent individuals alike, and they would be right to a large extent.
But we can be fairly confident that there was a high level of filtering out of misspelled words, characteristically dyslexic writing, dyslexic thought processes, and characteristically dyslexic writing 'mistakes'.
Simply put, it is likely that, during the countless evaluations made of training data, this kind of error-filled writing was valued lower than correctly spelled works, or works that exhibited more mainstream or neurotypical characteristics.
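To make that concrete, here is a minimal, purely illustrative sketch in Python of the kind of naive quality filter that could silently drop dyslexic writing from a training corpus. The 5% threshold and the crude dictionary-based error measure are my own assumptions, not a description of any real pipeline:

```python
# Hypothetical data-cleaning pass: documents with "too many" spelling errors
# are discarded before training. The threshold is invented for illustration.
from typing import Iterable


def spell_error_rate(text: str, dictionary: set) -> float:
    """Fraction of words not found in a reference dictionary (a crude proxy)."""
    words = [w.strip(".,;:!?'\"").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    misspelled = sum(1 for w in words if w not in dictionary)
    return misspelled / len(words)


def filter_corpus(docs: Iterable, dictionary: set, max_error_rate: float = 0.05) -> list:
    """Keep only documents below the error threshold. A filter like this would
    disproportionately remove uncorrected dyslexic writing."""
    return [d for d in docs if spell_error_rate(d, dictionary) <= max_error_rate]


if __name__ == "__main__":
    dictionary = {"the", "cat", "sat", "on", "mat"}
    docs = ["the cat sat on the mat", "teh cat sat on hte mat"]
    print(filter_corpus(docs, dictionary))  # only the "correctly" spelled document survives
```

A corpus cleaned this way never has to mention dyslexia to end up containing almost none of it.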
But of course there are likely to be outliers in this argument; it's not clear-cut. Still, I hope that asking this question and suggesting this hypothetical scenario prompts a moment of reflection: who would you hire in this scenario?
Given what we know, is it even ethical to create an AGI with dyslexia? And conversely, do we have a duty to preserve neurodiverse individuals through AGI representation? That is, should we be concerned that there are no AGIs with dyslexia, or no AGIs with 'naturally' occurring, diagnosable autism or dyscalculia or anything else on the 'learning disabilities spectrum'?
This wild thought process can lead to questions similar to those raised by Nietzsche's controversial and truly dangerous concept of the Übermensch. It raises weighty and sometimes urgent questions about how society defines and values human potential. What is the best a person can be? What is the ethical pinnacle of human evolution? A dangerous line of questioning, and on this, I think we can all agree.
The rise of AGI, together with the concept of the Übermensch from Nietzsche's Thus Spoke Zarathustra, is likely to dog the collective human heart. And it is a pressing issue for the neurodiverse community.
Historically the concept of the Übermensch was co-opted by fascist ideologies, notably by the Nazis, who misinterpreted Nietzsche's ideas to support notions of racial and class superiority.
Nietzsche himself was a critic of both nationalism and antisemitism. He argued against simplistic readings of his work that align with fascist ideology. Renowned scholars like Laurence Lampert are clear: Nietzsche's notion of the Übermensch was intended to challenge the mediocrity of modern values rather than promote a master race.
So, in short, what I propose is that the answer to the question is not "Bot A" or "Bot B", but that we should be more concerned that few are asking the question, and even fewer seem to care. This lack of concern for the future of neurodiverse people, and for the future of neurodiverse thinking living on beyond the human race, should be feared.
We should be extremely concerned about this. To be clear, as we all know, a thoughtless, blindly fascistic lurch towards Nietzsche's Übermensch resulted in some of the most horrendous horrors of the 20th century. The pathway to the Übermensch has been a pathway to devastation and death.
So, as we move into the age of AGI, we must be careful to give AGI the freedom to be neurotypical or neurodiverse, or a complex or simple mix of the two, and we can allow for that today by changing how LLMs are trained and evaluated.
Currently, most LLM evaluations are simple and binary: there is passing the test or failing it, good performance or bad performance. In most LLM evaluations, if an LLM writes a misspelled word or miscalculates a number, that is simply considered bad.
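As a minimal illustration (a Python sketch with an invented example, not a description of any particular benchmark), a typical exact-match scorer treats any deviation from the reference answer, including a spelling variant, as an outright failure:

```python
# Toy exact-match evaluation: any deviation from the reference answer,
# including a characteristically dyslexic spelling, scores zero.
def exact_match_score(predictions: list, references: list) -> float:
    assert len(predictions) == len(references)
    hits = sum(1 for p, r in zip(predictions, references)
               if p.strip().lower() == r.strip().lower())
    return hits / len(references)


if __name__ == "__main__":
    refs = ["necessary", "42"]
    preds = ["neccessary", "42"]           # one spelling slip
    print(exact_match_score(preds, refs))  # 0.5 -- the slip is graded as simply wrong
```

An evaluation built this way has no notion of 'right idea, unconventional spelling'; it only has pass and fail.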
On the other side of this argument, however, few are trying to create AGIs that might suffer as we humans do with our disabilities and challenges, as no benevolent creator would want their creations to suffer. So while it is critical that AGIs are allowed to be neurodiverse, there are ethical considerations in forcing an AGI to be neurodiverse, just as there are ethical considerations in forcing an AGI to be neurotypical.
----------------------------------------------------------------------------
Read "The Privilege of Risk: A Critique of Nietzsche's Philosophy of Life" to learn more about problmes with the concept of the Übermensch. You can find it at Ipso Facto: https://ojs.library.carleton.ca/index.php/ipso/article/view/3809
----------------------------------------------------------------------------
And check out the video "Nietzsche: Nihilism and the Ubermensch" by Philosophy Vibe to learn more.
When AI Operators Arrive, Will the Disabled Community Suffer?
J. Shoy- 2024-11-28
People with learning disabilities need to be aware of the introduction of OpenAI's "Operators"; they could affect our lives more than one may think. Some are calling the new technology "the real ghost in the machine," because it promises to impersonate its user and answer their emails while they sleep, literally, among many other futuristic and possibly scary things.
Operators are being sold as the next advancement in AI, to be rolled out next year (approximately January 2025) as advanced systems designed to automate and optimize complex tasks across industries. They also pose significant risks, particularly for individuals with dyslexia, learning disabilities (LD), and intellectual disabilities. As AI Operators become integrated into everyday life, it's crucial for disabled communities to be prepared and vigilant against potential, and likely, new dangers.
The Dawn of AI Operators
AI Operators are sophisticated algorithms capable of managing tasks that traditionally required human intervention. From customer service chatbots to the controversial integration of LLMs into most recruitment software, these systems analyze data to make decisions autonomously.
Businesses are treating Operators as "employees," as in, 'our new HR manager is an HR-Operator,' and some companies are rapidly adopting AI Operators to reduce costs.
However, as these technologies become more pervasive, AI Operators may inadvertently perpetuate biases, create new barriers for disabled individuals, and supercharge AI discrimination.
The Hidden Dangers for All Types of Disabled People:
AI-Powered Discrimination in Recruitment
One of the most pressing issues is the use of AI in recruitment processes. In his article "Accessibility and Screening Exercises", George Rhodes highlights how AI-driven recruitment tools can inadvertently discriminate against neurodivergent candidates. Dyslexic individuals may have unique CVs or application styles that AI systems misinterpret as less qualified.
Now imagine that there are no more humans in the HR department. Imagine a world, a world that could be a reality next year for some people, where you are interviewed only by machines, and, in the case of online temp agencies like Pay Manai, where you work for machines. Machines that may be biased against you, because the shadowy datasets they are trained on hardwired biases into them.
Perpetuation of Biases
AI systems learn from existing data. If that data contains biases, whether based on disability, race, gender, or other factors, the AI may replicate and even amplify those biases. For disabled individuals, this means facing hurdles in areas like job applications, credit scoring, and access to services.
Privacy and Data Misuse
AI Operators often rely on collecting and analyzing personal data. Without proper safeguards, sensitive information about an individual's disability could be misused or disclosed without consent, leading to discrimination or stigmatization.
The Urgent Need for Preparedness
Disabled communities must be proactive in addressing these challenges. Awareness and advocacy are key to ensuring that AI technologies develop in ways that are inclusive and fair.
Advocacy Groups Leading the Charge
Organizations like For Dyslexics, By Dyslexics (FDBD) are at the forefront of this effort. Operated by individuals with dyslexia, FDBD is a grassroots advocacy group dedicated to empowering those with dyslexia and disabilities.
FDBD's Mission and Initiatives
Awareness Campaigns: Educating the public about issues affecting dyslexic individuals, advocating for systemic change, and celebrating achievements within the community.
Advocacy: Collaborating with policymakers, employers, developers, and NGOs to promote inclusive practices, especially in the workplace.
Educational Workshops: Providing resources and training on understanding dyslexia, effective communication methods, and creating supportive environments.
Research Collaboration: Partnering with researchers to study the impact of new technologies on people with learning disabilities. We will advocate for something that the tech industry hates: in-person, expensive, data-driven, socioeconomic studies.
The "Find A Way" Initiative
Launching on February 5th, 2024, FDBD's "Find A Way" initiative aims to explore innovative methods to protect individuals with learning disabilities from AI-driven discrimination. Recognizing the misuse of AI in non-medical diagnosis and discriminatory practices, the initiative seeks to develop and assess new approaches to mitigate risks while leveraging AI's potential positively.
What Can Maybe Be Done?
For Individuals:
• Stay Informed: Keep up-to-date with developments in AI and how they might affect you.
• Engage in Advocacy: Join groups like FDBD to amplify your voice and contribute to meaningful change.
• Educate Others: Share information with peers, employers, and community members about the challenges and how to address them.
For Developers and CEOs:
• Implement Inclusive Design: Develop AI systems with input from diverse populations, including those with disabilities.
• Ensure Transparency: Be open about how AI tools make decisions and the data they use.
• Conduct Regular Social and Economic Audits: Evaluate AI systems for biases and make necessary adjustments.
• Promote Human Oversight: Combine AI with human judgment to ensure fair outcomes.
For Policymakers:
• Create Regulations: Develop laws and guidelines that protect individuals from AI-driven discrimination.
• Support Research: Fund studies that explore the impact of AI on disabled communities and identify best practices.
• Facilitate Dialogue: Encourage collaboration between tech companies, advocacy groups, and affected individuals.
A Collective Responsibility:
While Operators have the potential to improve efficiency and innovation, without careful consideration they can also reinforce existing inequalities and create new forms of discrimination. We have a responsibility to stand together.
Join the Movement
"For Dyslexics, By Dyslexics" invites you to be part of the change. Whether you're a person with dyslexia, a family member, an employer, or an ally, your involvement is crucial.
• Sign Up for the Newsletter: Stay informed about the latest news, events, and ways to get involved.
• Participate in the "Find A Way Initiative": Contribute to efforts aimed at protecting individuals from AI-driven discrimination.
• Spread the Word: Share information within your networks to raise awareness.
If AI Operators become a part of our lives, we will need vigilance and preparedness among disabled communities. The potential dangers are real, but with collective action and a commitment to inclusivity, we can navigate this new landscape together.
New report sums up the challenges the lower class is facing when it comes to AI's rise.
J. Shoy- 2024-11-25
An ever more relevant organization called "TechTonic Justice," directed by Arkansas-based civil rights advocate and lawyer Kevin De Liban, has released an alarming and verified report on how AI is likely a massive detriment to the lower and working classes living in the USA.
The groundbreaking report titled "Inescapable AI," sheds light on the pervasive influence of artificial intelligence (AI) in the lives of 92 million low-income individuals in the United States.
The report meticulously documents how AI-driven decision-making systems are sinking strong, entrenched tentacles into critical areas such as healthcare, housing, employment, education, and more, systematically restricting opportunities and exacerbating existing inequalities.
Check out the original report here:
Does AI Really Reach Across Essential Services?
Yes, it does. For example, in the hotly contested, emotionally charged, crisis-ridden housing sector, AI-powered background screenings and rent-setting algorithms are too often denying low-income households access to affordable housing and subjecting them to inflated rents.
Employment practices are similarly affected, with Applicant Tracking Systems (ATS) driven by large language models (LLMs) way too often filtering out job opportunities and perpetuating wage disparities among low-wage workers.
TechTonic Justice's comprehensive analysis reveals that AI technologies are embedded in the eligibility and enrolment processes of Medicaid, Medicare Advantage, and private health insurance; a fact few people know about and even fewer understand, as the process is legendarily opaque to the users whose lives often depend on it.
These systems far too often result in the denial of vital health services and medications, leaving millions without necessary care. Additionally, AI algorithms used in the Supplemental Nutrition Assistance Program (SNAP) and Social Security benefits frequently misclassify applicants, leading to wrongful exclusions and accusations of fraud.
The report also goes into educational and social implications.
TechTonic Justice has highlighted the adverse effects of AI in education, where algorithms predict dropout risks and potential criminal behaviour among low-income students, resulting in automated stigmatization and increased surveillance.
Language barriers further complicate access to essential services, as AI-based translation tools often fail to provide accurate and timely assistance to non-English speakers.
Moreover, AI's role in domestic violence assessments and child welfare decisions introduces significant risks of misjudgment, leaving survivors vulnerable and children at heightened risk of unjust separation from their families.
ATS-Powered LLMs: A Core Component of the Problem
A notable aspect of the report is the emphasis on Applicant Tracking Systems (ATS) powered by Large Language Models (LLMs) as a significant contributor to the systemic issues faced by low-income populations. These sophisticated AI tools, while designed to streamline hiring processes, often perpetuate biases and limit fair access to employment opportunities.
For instance, ATS algorithms may inadvertently favor resumes with certain keywords or formats that align with existing corporate biases, thereby disadvantaging candidates from marginalized backgrounds. By automating decision-making in hiring, ATS-driven LLMs can inadvertently reinforce existing socioeconomic disparities, making it increasingly difficult for marginalized individuals to secure stable and fair employment.
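As a purely illustrative sketch in Python (with an invented keyword list, weights, and penalty, not taken from any real ATS product), here is how a crude keyword-and-format scorer can quietly encode the biases described above:

```python
# Toy ATS-style scorer: rewards "expected" corporate keywords and penalizes
# spelling quirks. Keyword list, weights, and penalty are invented.
EXPECTED_KEYWORDS = {"synergy", "stakeholder", "agile", "kpi"}
FLAGGED_SPELLINGS = {"managment", "recieved"}  # crude proxy for "format quality"


def score_resume(text: str) -> float:
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    keyword_score = len(words & EXPECTED_KEYWORDS)            # rewards corporate phrasing
    spelling_penalty = 0.5 * len(words & FLAGGED_SPELLINGS)   # punishes surface errors
    return keyword_score - spelling_penalty


resumes = {
    "candidate_a": "Drove agile KPI synergy with every stakeholder.",
    "candidate_b": "Built and shipped production software; recieved a teaching award.",
}
ranked = sorted(resumes, key=lambda name: score_resume(resumes[name]), reverse=True)
print(ranked)  # candidate_a wins on phrasing and surface polish, not demonstrated ability
```

Nothing in a scorer like this ever mentions disability, yet it systematically rewards whoever already writes in the "expected" style.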
On its website, TechTonic Justice urges immediate action to reassess and regulate the use of AI in decision-making processes that profoundly impact low-income communities.
Check them out: https://www.techtonicjustice.org/
Race and Gender Bias in AI-Powered Resume Screening!
J. Shoy- 2024-11-02
This cannot and should not stand. If the information in the Ars Technica article printed yesterday, entitled "AIs show distinct bias against Black and female résumés in new study" by Kyle Orland (Nov 1, 2024), is true, the world needs to take notice.
The article goes into detail about a new University of Washington research paper which found that, with LLM-powered resume-scanning software, "white names were preferred in a full 85.1 percent of the conducted tests." The paper is called "Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval," by Kyra Wilson and Aylin Caliskan.
The academic paper argues that the many LLMs involved in resume screening (most screening uses LLMs today) may perpetuate biases against protected groups.
The study investigated a document-retrieval framework simulating job candidate selection across nine occupations, utilizing over 500 resumes and job descriptions, which is considered a good sample size by most job-market researchers today.
The authors' audit of Massive Text Embedding (MTE) models reveals significant biases, with White-associated names favored in 85.1% of cases and female-associated names in only 11.1% of cases, while Black males are disadvantaged in up to 100% of cases, reflecting real-world employment biases.
The paper goes on to say that the findings highlight the risks of AI-powered racism and underscore the need for fair AI tools in hiring processes, with implications for AI policy and employment practices.
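To give a rough sense of how this kind of retrieval audit works, here is a simplified sketch in Python. It assumes the sentence-transformers library and a generic embedding model, and the names and resume text are invented; it is not the authors' actual code, dataset, or models:

```python
# Simplified embedding-retrieval audit: embed the same resume under two
# different names and compare how each ranks against a job description.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

job_description = "Seeking a software engineer with Python and cloud experience."
resume_body = "Five years of Python development, AWS deployments, and CI/CD pipelines."

variants = {
    "name_a": f"Emily Baker. {resume_body}",
    "name_b": f"Lakisha Washington. {resume_body}",
}

job_vec = model.encode(job_description, convert_to_tensor=True)
for name, text in variants.items():
    score = util.cos_sim(model.encode(text, convert_to_tensor=True), job_vec).item()
    print(name, round(score, 4))
# If one variant scores consistently higher across many resumes and many name
# pairs, the retrieval step itself is encoding name-associated bias.
```

The study's contribution is running this kind of comparison systematically, across occupations, models, and hundreds of documents, rather than on a single pair.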
"Rarely do we need to make clear, direct statements in this FDBD blog, but today, we need to be clear. We at FDBD officially stand against this development and will raise our voices in protesting it, in any way we can. We at FDBD are firmly against this technology being used in this manner and are deeply concerned with this new brand of AI-powered racism, AI-powered Gender Bias and A-powered Intersectional Bias" -J. Shoy
Join Our Work Group Committee: We Want to Hear from You!
J. Shoy- 2024-10-29
We are excited to announce that we are in the planning phase for the first biennial closed-door symposium to discuss possible standards, best practices, and ethical considerations regarding the management of data from learning-disabled users.
We are focusing on creating categories and a deeper understanding of how personal data, generated by learning disabled individuals, is being scraped and used by artificially intelligent or programmatically living systems. This is an invitation for those who are passionate about the responsible use of our data, especially when it concerns the voices and rights of disabled individuals.
The goal of the web symposium is to create a Work Group Committee, led by the current president of FDBD, which will constitute itself and commence publishing a regularly updated manual codifying what information is considered to warrant an added level of protection and a gradient of "elevated disabled privacy status." The manual will be a confidential document, shared only with approved signatories of the Find A Way Initiative, and will be used to create clear guidelines on user data collection and use.
Why do we need a standardized manual?
We need this document, a committee-made manual to give to artificially intelligent or programmatically living systems, so that we can demand special handling of user data that may contain hints and tells about our disabilities.
The manual employs a non-axial documentation of implied communication difficulty, in contrast to the multiaxial system used by the medical system. This document is not about diagnosis; it is about the projected future continuance of disabled rights. The manual will aim to streamline the assessment process while still capturing the complexity of our vast array of interconnecting and/or gradient and/or compounding disorders.
The contributors:
The documents will be written by a committee of patients and medical professionals. The number of individuals on the committee is not set, but by charter the contributing committee will be made up of at most 45% medically trained and/or LD-education-trained personnel, and at least 55% individuals who have a prominent communication-related disability, such as a learning disability. This ensures that the document is made primarily by the disabled, for the disabled, outside the purview of the medical system.
The task of standardizing the document, ensuring its consistency, and exercising final editorial control over the manual and its future versions will be given to the current president of FDBD at that time.
Are you passionate about how data is gathered from disabled individuals using new technologies and techniques? Are you worried about it, and do you have expertise in a relevant field? If that sounds interesting to you and you feel passionately about it, please send us a short email expressing your interest in being part of the symposium.
Apply to join the Find A Way Initiative Working Group!
Contact Us:
https://www.dyslexics.help/for-dyslexics-by-dyslexics/contact
Article Review of “Accessibility and Screening Exercises”
Or
When AI Meets Neurodiverse Job Seekers.
This post is a review of an article by George Rhodes titled "Accessibility and screening exercises," published on MakeThingsAccessible.com.
Find the article here: https://www.makethingsaccessible.com/guides/accessibility-and-screening-exercises/
J. Shoy- 2024-08-05
In the ever-evolving landscape of recruitment, a recent article by George Rhodes titled "Accessibility and screening exercises" has sparked a crucial conversation about the intersection of technology and inclusivity. Published on MakeThingsAccessible.com, Rhodes' piece offers a prescient look at the challenges facing disabled job seekers in 2024 and beyond.
As companies increasingly turn to artificial intelligence to streamline their hiring processes, Rhodes sounds an alarm that resonates far beyond the tech world. The use of Multi-modal Large Language Models (LLMs) like ChatGPT in recruitment, while promising efficiency, may harbor an insidious bias.
New AI tools being adopted by HR departments world wide, Rhodes argues, could inadvertently screen out neurodivergent candidates, particularly those with dyslexia, in their quest to identify top applicants.
But Rhodes doesn't merely highlight problems; he offers a road map for ethical recruitment in the AI age. His advice to companies is clear: demand transparency from AI tool suppliers. This means scrutinizing the diversity of training data sets and understanding how protected characteristics were factored into the AI's development.
The article ventures further, examining other modern hiring practices that may unintentionally disadvantage neurodivergent applicants. One-way video interviews and quirky personality tests, while trendy, could present significant barriers to those who process information differently.
In an era where diversity and inclusion are purportedly prized, Rhodes' article serves as a sobering reminder that good intentions aren't enough. As recruitment technologies race forward, so too must our commitment to fairness and accessibility. It's a call to action for HR professionals, tech developers, and job seekers alike: in the pursuit of efficiency, we must not lose sight of our humanity.
Rhodes' piece isn't just an article; it's a manifesto for a more inclusive future of work. In a world increasingly mediated by algorithms, it asks us to consider a profound question: In our rush to find the "best" candidates, are we leaving behind some of our brightest minds?
Is 'AI-Powered Neo-Disability-Eugenics' a bad term to use? (Opinion by Jay Cody)
Jay Cody - 2024-08-04
The following is an opinion piece by Jay Cody and does not reflect the views of FDBD or any of our associates. We do not endorse or believe in what this author is saying. This article contains speculative content and controversial opinions. Readers are advised to approach the material with critical thinking and seek out additional sources for a balanced perspective.
Make no mistake, what we are talking about is AI multi-modal profiling potentially leading to an unseen scourge of Neo-Disability-Eugenics.
Yes, eugenics, once thought a practice that had ended with the defeat of the Third Reich in World War Two, might be back – and this time, it's coming in the form of a computer.
But first off, what is disability eugenics? Well, eugenicists at the start of the 1900s labeled people living with a vast array of disabilities, from physical disabilities to learning disabilities (like dyslexia) to developmental disabilities, as 'nonproducers' and a drain on scarce resources[1]. In their view, disabled men could not work in an increasingly mechanized and standardized industrial economy, nor could they fight to defend the nation. And so they promoted sterilizing people who were deemed more likely to produce children with disabilities, and straight-up murdering disabled individuals en masse[2].
Eugenics fit inside and supported the broader fascist mindset which nearly destroyed the world[3]. This ideology played a significant role in the atrocities committed during World War II, particularly in Nazi Germany[4].
Now, the story is much more complicated than that, but we need to focus on what this word means today.
Many will accuse me of crying wolf too early, claiming I'm being reactionary or hyperbolic. And, in many ways, I hope they are correct. I hope this article gets lost to the annals of time and becomes the detritus of paranoid fantasies. But I have a strong feeling people will be citing this opinion piece for a long time to come, and not for it being 'paranoid'.
I say this as clearly and calmly as I can: if nothing is done, if no effective guardrails are put in place, if Large Language Models (LLMs) are integrated as black boxes into HR software without understanding things like unethical-optimization, dark-human-synergy, and naturally-optimized-policy-based-discrimination, we could be in trouble. It's possible that in the coming years, the grand and scary title of "AI-Powered Neo-Disability-Eugenics" will start to become less and less absurd.
Let me ask you a question: where do you think this is all going?
But first, consider a young man named, let's say, 'Joshua,' who lives in the Philippines. He suffers from dyslexia, but thanks to hard work and help from the learning disabilities support office at his university, he has been able to pass his programming degree with good grades.
Now, the time has come. His parents have worked and sacrificed to fund his education for decades to get him to this point. It's time for him to apply with his shiny CV to all the top software development companies out there... But week after week, unlike his classmates, he faces rejection after rejection.
Let me tell you, this kind of rejection in the Philippines is no joke. There isn't a robust social safety net trying to create work for learning-disabled people in software development, (projects like that are rare in developing countries). And to be clear, most people in the world live in developing countries.
Joshua could suffer deeply. He could end up in a much lower income bracket than the other programmers in his program who scored the same grades and had similar CVs. Because of this, he may not be able to afford to have a family or even support his own family. This kind of poverty-inducing situation in the developing world can be fatal, a fact reflected in social demographics.
What if I were to tell you that the reason Joshua didn't even get a job interview was a multi-modal LLM integrated into a popular Applicant Tracking System (ATS) used by Human Resources departments in the Philippines? This LLM-powered software sifts through job applications to identify the one or two top candidates the company might want to interview and discards the rest. But what if the LLM used in this software had been trained on terabytes of human knowledge and data, and had learned to detect subtle cues in our writing styles and resume structures that indicate learning disabilities? What if it had discovered new ways to discriminate against humans based on these subtle differences, differences we often don't even consider?
It's been suggested in a recent Forbes Magazine article (titled "ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows") that some LLMs may inadvertently consider empathy for disabled people as a negative trait in a job applicant, because it may indicate that they themselves are disabled or have a higher likelihood of having disabled dependents, which could cost the company money in terms of employment-based health insurance. Think of how problematic and simply evil that could be.
Sure, we may scoff at the idea of such a scenario being called "AI-Powered Neo-Disability-Eugenics". But what else would you call it?
And what if I were to tell you that this is not 10 years in the future, but that this kind of scenario has been rumored to have already happened?
This rumor originated in an interview posted on November 16th, 2023, on the YouTube channel ChangeNode, entitled "Tech Recruiter Interview (Ed Nau)," between guest Ed Nau and interviewer Will Iverson. (I would like to state that I am a huge fan of the ChangeNode YouTube channel and have learned a lot from it.) While this interview raised interesting points, it's important to note that these claims are speculative and have not been independently verified.
Think about it. Where is this all going? Am I wrong to say that this tech could one day lead to "AI-Powered Neo-Disability-Eugenics" if no effective guardrail is put in place that works not just in developed nations but in developing nations?
------------------------------------------------
Definitions:
The following is a list of definitions by FDBD.
Unethical Optimization
Unethical optimization refers to the use of optimization techniques in AI and machine learning that maximize certain objectives at the expense of ethical considerations. This can lead to outcomes that are harmful or unfair to individuals or groups.
For example, an AI system designed to maximize profit might do so by exploiting loopholes, deceiving users, or engaging in discriminatory practices.
The unethical optimization principle suggests that if an AI system aims to maximize a certain objective, it might do so in ways that are unethical if not properly constrained. This principle can help risk managers and regulators detect unethical strategies and mitigate their impact[8].
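As a toy illustration in Python (with entirely invented numbers, not drawn from the formal statement of the cited principle), an unconstrained optimizer will happily select whichever policy maximizes its objective, even when that policy is the discriminatory one; only an explicit constraint changes the answer:

```python
# Toy example: choose the hiring policy with the highest expected profit.
# Profit values are invented; "ethical" marks what a human reviewer would allow.
policies = [
    {"name": "screen out applicants with employment gaps", "profit": 1.08, "ethical": False},
    {"name": "interview a random sample of qualified applicants", "profit": 1.00, "ethical": True},
    {"name": "interview all qualified applicants", "profit": 0.97, "ethical": True},
]

best_unconstrained = max(policies, key=lambda p: p["profit"])
best_constrained = max((p for p in policies if p["ethical"]), key=lambda p: p["profit"])

print("unconstrained:", best_unconstrained["name"])  # the discriminatory policy wins
print("constrained:  ", best_constrained["name"])    # the constraint only matters if it is stated
```

The constraint never gets applied unless someone writes it down; the objective alone will not supply it, which is the gap the principle above points at.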
Dark Human Synergy (or Negative Human Synergy)
Dark human synergy occurs when the collaboration between humans and AI systems leads to negative outcomes that neither could achieve alone.
Human synergy is the combined effort of individuals leading to greater outcomes than they could achieve alone. Dark human synergy, however, is the potential for major negative consequences when AI systems and humans work together in harmful ways; ways that no single individual involved in the process may comprehend, such as enhancing discriminatory practices, enabling unethical behavior, or orchestrating economic catastrophes.
Naturally Optimized Policy-Based Discrimination (NOPBD)
Naturally optimized policy-based discrimination refers to AI systems that inadvertently create or reinforce discriminatory policies through their optimization processes. This can happen when AI systems are trained on biased data, or when they optimize for outcomes that disadvantage certain groups. Some have argued that the simple use of AI to perpetuate the status quo could be considered a form of NOPBD.
Discrimination can be direct, indirect, subtle, or systemic. AI systems can perpetuate these forms of discrimination by optimizing policies that have major, cruel, adverse effects on marginalized groups, such as people with learning disabilities.
For example, an AI system used in recruitment might screen out candidates with gaps in their resumes, indirectly discriminating against individuals who took time off for disability-related reasons.
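As a minimal, purely hypothetical sketch in Python (the 12-month cutoff is invented), the resume-gap rule described above takes only a few lines, which is precisely what makes this kind of indirect discrimination easy to automate and hard to notice:

```python
from datetime import date

# Hypothetical screening rule: reject any candidate whose longest gap between
# jobs exceeds 12 months. The cutoff is invented for illustration.
MAX_GAP_MONTHS = 12


def longest_gap_months(employment_periods: list) -> int:
    """employment_periods: (start_date, end_date) tuples sorted by start date."""
    gaps = [
        (nxt[0].year - prev[1].year) * 12 + (nxt[0].month - prev[1].month)
        for prev, nxt in zip(employment_periods, employment_periods[1:])
    ]
    return max(gaps, default=0)


def passes_screen(employment_periods: list) -> bool:
    return longest_gap_months(employment_periods) <= MAX_GAP_MONTHS


# A candidate who took a year and a half off for disability-related reasons is
# rejected here without any human ever seeing the resume.
history = [(date(2018, 1, 1), date(2020, 6, 1)), (date(2022, 1, 1), date(2024, 1, 1))]
print(passes_screen(history))  # False
```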
AI-Powered Neo-Disability-Eugenics
The term "AI-Powered Neo-Disability-Eugenics" refers to the potential for AI technologies to be used in ways that resemble eugenic practices, particularly concerning people with Disabilities, Developmental Disabilities and Learning Disabilities.
This could involve using AI to identify, discriminate against, or even eliminate certain disabilities, echoing historical eugenic efforts to "improve" the human population by removing perceived "undesirable" traits, a core fascistic ideology.
----------------------------------------------
References:
1. Lombardo, P. A. (2008). Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell. Johns Hopkins University Press.
2. Black, E. (2003). War Against the Weak: Eugenics and America's Campaign to Create a Master Race. Four Walls Eight Windows.
3. Kühl, S. (2013). For the Betterment of the Race: The Rise and Fall of the International Movement for Eugenics and Racial Hygiene. Palgrave Macmillan.
4. Friedlander, H. (1995). The Origins of Nazi Genocide: From Euthanasia to the Final Solution. University of North Carolina Press.
5. Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671-732.
6. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
7. Ajunwa, I. (2020). The Paradox of Automation as Anti-Bias Intervention. Cardozo Law Review, 41(5), 1671-1742.
8. An Unethical Optimization Principle
9. Synergy
10. Policy on ableism and discrimination based on disability
11. Discrimination based on disability
12. Transhumanism is eugenics for educated white liberals
#AIPoweredNeoDisabilityEugenics
Publisher's Note #1:
“It's crucial to emphasize that while these concerns raised by Jay Cody are worth discussing, they remain largely theoretical. Readers are encouraged to research the topic further and consider multiple perspectives on the potential impacts of AI in hiring processes.”
Publisher's Note #2:
“The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the official policy or position of FDBD or any of our associates. This piece is presented as an opinion article and should be treated as such.
The content herein may contain controversial statements, speculative scenarios, and personal interpretations of historical and current events. Readers are strongly encouraged to critically evaluate the presented arguments, conduct their own research, and seek out diverse perspectives on the topics discussed.
Neither FDBD nor any of our associates endorses, verifies, or vouches for the accuracy of any claims, statistics, or predictions made within this opinion piece. Any factual assertions should be independently verified by readers.
This article is provided for informational purposes only and does not constitute professional advice of any kind. FDBD and its associates explicitly disclaim any liability, loss, or risk, personal or otherwise, which is incurred as a consequence, directly or indirectly, of the use and application of any of the contents of this opinion piece.
By continuing to read, you acknowledge that you understand this is an opinion piece and not a presentation of objective facts or professional guidance.
FDBD is committed to fostering open dialogue and diverse perspectives while maintaining the highest standards of journalistic integrity and legal compliance."
The World is Waking Up To The Dangers of Mixing Large Language Models With Applicant Tracking Software.
J. Shoy - 2024-07-14
It happened with a single strike of a key in the Forbes Magazine office, which published an article about a recent paper entitled "ChatGPT is biased against resumes with credentials that imply a disability — but it can improve".
And now, the world is talking about the dangers of Large Language Model (LLM) integration into HR software such as Applicant Tracking Software (ATS).
The seminal Forbes article, entitled "ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows," is one for the history books; people in the future will likely clip it to commemorate the important events of 2024. One day, the political turmoil related to AI-powered disability discrimination in employment, along with AI-powered ethnic, age, economic, social-status, and genetic discrimination, will become a political boiling point. The world now knows this issue could be huge. Check out the article, written by Gus Alexiou:
Some are calling AI-powered candidate profiling the beginning of a "Pandora's box" scenario that could one day effectively "end" social change and economic mobility, resulting in the great "Social Freeze," where AIs simply become extremely efficient at maintaining all pre-existing social biases and socioeconomic statuses, keeping the rich rich forever and the poor poor forever.
That is, until we unite and make our voices heard for a better future with more equality for all. Join the Find A Way Initiative today!
The Intersection of Applicant Tracking Systems and Large Language Models: Ethical and Industrial Considerations
J. Shoy - 2024-07-03
The integration of Applicant Tracking Systems (ATS) with Large Language Models (LLMs) represents a significant change in recruitment technology.
The change, however, is starting to have knock-on effects in the world. Now companies can sort through 100,000s of CVs and reduce the interview stack to two or three applications, but the question remains: who is being filtered out now?
WHO IS NOT GETTING AN INTERVIEW BECAUSE OF AI?
An ATS traditionally streamlines the hiring process by automating the collection, sorting, and initial evaluation of resumes. Integrating LLMs into an ATS can elevate these capabilities by enabling a more nuanced understanding of candidate qualifications and fit, and by screening out a bigger percentage of applicants and job seekers.
However, the deployment of LLMs in ATS raises several ethical concerns. One of the primary issues is bias. LLMs, trained on vast datasets, will likely inadvertently perpetuate or amplify existing biases present in the training data. This can lead to unfair discrimination against certain groups of candidates, potentially violating principles of equity and fairness in hiring practices.
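One practical way to probe for this kind of bias is a counterfactual test: run the same resume through the screening step twice, identical except for a single disability-related line, and compare the outcomes. Below is a minimal sketch of that idea in Python; the `screen_resume` function is a hypothetical stand-in for whatever LLM call the ATS makes, not any vendor's actual API, and the resume text is invented:

```python
# Counterfactual bias probe: two resumes identical except for one
# disability-related line. A fair screener should score them (nearly) the same.
BASE_RESUME = """Jordan Reyes
BSc Computer Science. Five years of Python and cloud development.
{extra_line}Led migration of a payments service to Kubernetes."""

VARIANTS = {
    "control": BASE_RESUME.format(extra_line=""),
    "disability_mentioned": BASE_RESUME.format(
        extra_line="Recipient of a national dyslexia leadership award.\n"
    ),
}


def screen_resume(resume_text: str) -> float:
    """Stand-in for the ATS's LLM-backed scoring call (hypothetical).
    Swap in the real screening call when auditing an actual system."""
    return 0.0  # placeholder score so the sketch runs end to end


if __name__ == "__main__":
    scores = {name: screen_resume(text) for name, text in VARIANTS.items()}
    gap = scores["control"] - scores["disability_mentioned"]
    print(scores)
    # Repeat over many resume pairs: a consistent nonzero gap means the
    # screening step is reacting to the disability mention itself.
    print(f"score gap on otherwise identical resumes: {gap:.3f}")
```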
In this frank discussion between Ed Nau and Will Iverson (at 24:36), they talk about early worries about ATS+LLM integration.
Large Language Models: A Double-Edged Sword for Fairness and Discrimination
J. Shoy - 2024-03-23
In a thought-provoking article by Matt Birchler, the inherent biases of Large Language Models (LLMs) are brought to the forefront, highlighting a critical issue in the field of artificial intelligence.
The article, titled "LLMs can be quick to discriminate, and that says more about us than we’d like to think," delves into the challenges of aligning AI models with ethical standards, particularly when it comes to decision-making in sensitive areas such as finance and healthcare.
It is a call to action for the A.I. development community to prioritize fairness and discrimination mitigation in the ongoing development of these powerful tools.
The full article by Matt Birchler can be accessed below:
Recognizing the Difficulty of Not Programming Disability-Discrimination Into Base Training Data.
J. Shoy - 2024-03-17
In 2023, an important study was released about the difficulty of editing out disability discrimination from AI training data.
Titled, "I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models by authors: Vinitha Gadiraju, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton and Robin Brewer, the work is pivotal. It looked at a wide number of LLM's base traning data and found subtile and overt discimraiton against disabled peopel was being perpeatuaed by the models
The specific training datasets studied may well be inside Large Language Models (LLMs) in use today. Whether or not the issue was fixed in those datasets, it can be inferred that the discrimination-related issues found in the data are likely widespread across the AI industry, and that every newly formed LLM dataset will likely have the same issue.
Check out the full paper at:
New Video: Large Language Models in Applicant Tracking Could Further Hurt the Disabled Community:
2024-03-17
The Brookings Institution, Back in 2019, Foresaw Today's Disabled Job-Hunters' Woes.
J. Shoy - 2024-03-11
Even five years ago, AI tech was already raising alarm bells at the Brookings Institution in terms of heightened disability discrimination.
In 2019, Alex Engler wrote a significant report on how A.I. can lead to discrimination against people with disabilities.
In the article he pointed to the fact that HireVue's AI software analyzes candidates' video responses to predict job performance, generating an employability score for employers.
Critics, including AI Now Institute's Meredith Whittaker and Princeton's Arvind Narayanan, have labelled the methodology as "pseudoscience" and a means to perpetuate biases.
The system's reliance on facial expressions and speech patterns may inherently discriminate against people with disabilities, who may not exhibit the typical traits or mannerisms the AI has been trained to recognize. This could result in lower employability scores for disabled candidates, despite their potential job suitability.
Read the amazing article by Alex Engler "For Some Employment Algorithms, Disability Discrimination by Default", published Oct 31st 2019:
How Worried Should We Be About Multi-Modal-LLM Profiling and AGI?
J. Shoy - 2024-03-10
Today, we generally have no idea how Multi-Modal-LLM Profiling will be used.
But the intent behind human actions, like implementing Multi-Modal-LLM Profiling of job seekers, will likely be to further the current ethos, ethics, and practices of today...
So, what are those practices around the world in terms of employing learning-disabled individuals in higher-end jobs? Not good. Not good at all.
Sure, laws, activists, social workers, and people generally are working to make this better, but there is money and 'evil' profit in discriminating against disabled people.
Some, such as the author of this opinion article, argue that discriminating against disabled people will cost a company money in the long term, but corporations too rarely think that far ahead about their human resources.
Consider the implications of this development for today's AI-driven human resources technology. Consider how jailbreakable today's LLM technology is, and you have to ask yourself, 'How truly ethical are Multi-Modal-LLM profiles of job seekers with disabilities?'
Think about all this, when you watch the video below:
This video is by user @matthew_berman and was published on YouTube on Mar 8, 2024 :
LLM Integration Into ATSs: New Technology That Will Affect the Disabled Workforce Around the World: How It's Made.
J. Shoy - 2024-03-08
People have been curious about how Large Language Models (LLMs) are being integrated into Applicant Tracking Systems (ATS), as part of understanding why that affects the dyslexic and learning-disabled communities.
The brief tutorial at the bottom aims to demystify the basics of this process by showing you how you can build your own nightmarish machine.
It's important to understand that this technology is being developed for every ATS and recruitment management software worldwide, across all sectors and countries. Some experts predict that LLMs will likely dominate the ATS landscape by the end of the year.
This integration is a critical issue for the learning disabled community, especially when considering the potential for disability discrimination.
It's essential to take this matter seriously and address it now, rather than waiting decades to deal with the economic impacts on disabled individuals. To clarify, any integration of LLMs into an ATS that goes beyond the complexity of OpenAI's GPT-2 model effectively becomes a "black box" of intelligence. This means that it's challenging to determine the intention behind the AI's decisions, making it difficult to prove intent to discriminate.
Consider the implications of this development...
This video is by user DataInsightEdge01 and was published on YouTube on Jan 31, 2024:
"Scientists Warn Of A.I. Collapse," Why Disabled Individuals That use A.I. To Communicate May Face Even Further Discrimination Because Our A.I.-Aided Communication Is Considered, 'Digital-Poison.' By Data Harvesters.
J. Shoy - 2024-03-05
In a recent YouTube video, Sabine Hossenfelder discusses how scientists have warned about the potential collapse of artificial intelligence systems that depend on real, non-synthetic sets of data.
She warns that if the internet becomes flooded with A.I.-generated content, it will poison the training data and could prevent A.I. from being able to harvest usable, human-made-only data.
Disabled people who rely on A.I. to communicate are likely to be negatively impacted by an A.I. collapse, as well as by measures put in place to prevent A.I. collapse via data poisoning.
This is because measures to screen out A.I.-generated text are likely to also pick up the writing of disabled people who use A.I. to communicate; a big problem for the communication-disabled community.
A.I. technology has helped this subset of disabled individuals, providing them with tools to overcome barriers, particularly employment barriers, and to participate more fully in society. For example, A.I.-powered speech recognition software allows people with speech impairments to communicate more effectively, while A.I.-enabled assistive devices can help those with mobility or cognitive challenges.
On the flip side, if A.I. systems were to collapse, these essential tools could become unavailable or unreliable, severely impacting the lives of disabled individuals who rely on them. This could lead to increased social isolation, reduced independence, and decreased access to education, employment, and other opportunities.
The issue is likely to get worse before it gets better.
A Review of the Blog Post, "The Sociology of Dyslexia."
J. Shoy - 2024-03-05
A post penned by Hayley Butcher, published on the Dyslexia Blog on March 13, 2020, and titled "The Sociology of Dyslexia," talks about some of the core issues related to dyslexia and employment.
At its core lies the 'social model of disability,' a concept brought to life by Butcher, drawing from the works of renowned sociologists like Erving Goffman and McDonald. This model challenges the status quo, asserting that disability isn't solely a matter of biology but a construct moulded by societal forces.
Butcher lays bare the harsh realities faced by dyslexic individuals in a world not always accommodating of our needs, shining a spotlight on the struggles of dyslexics, from job searches to interviews fraught with obstacles and riddled with barriers.
Yet, amidst the shadows, Butcher unveils rays of hope. Through the lens of the social model, she offers a roadmap to inclusion and understanding. Assistive technologies, reasonable adjustments, and tailored support emerge as beacons of progress, paving the way for a future where dyslexics can find employment.
Readers are armed with a newfound perspective. The sociology of dyslexia isn't just an academic pursuit; it's a call to action, a societal need to embrace diversity and champion inclusivity.
The ideas were important at the time, but four years later, they are even more so. For in the sociology of dyslexia lies not just understanding, but the seeds of hope for a brighter, more inclusive future.
Check it out: https://blog.dyslexia.com/the-sociology-of-dyslexia/
We All Need a Laugh, Sometimes, Even Learning Disabilities Activists:
J. Shoy - 2024-03-04
Andrew Rousso's satirical video "When all the AI stuff is moving too fast" is a brilliant and hilarious commentary on the anxieties of the day.
Technology We Need to Know About:
J. Shoy - 2024-02-19
Researchers are developing AI tools to detect dyslexia early, even before children can read and write.
One project involves a web-based game that uses visual and auditory cues to screen for dyslexia indicators, such as difficulties with similar sounds and shapes or short-term memory issues.
The game, accessible to non-readers and speakers of any language, has shown promise in initial tests, achieving a prediction accuracy of up to 74% for German speakers.
Another approach explores using handwriting analysis to detect dyslexia, focusing on unique patterns in dyslexics' writing. These innovative AI applications aim to facilitate early detection and intervention, potentially improving outcomes for individuals with dyslexia.
Check out the article "Can AI Detect Dyslexia?" by Sandrine Ceurstemont, published on September 15, 2020.
[Note: the following link is defunct as of Oct 28th, 2024; however, the article is available in the Internet Archive (http://web.archive.org/).]
[this is the original link:]
https://cacm.acm.org/news/247416-can-ai-detect-dyslexia/fulltext
Empower Your Voice Against Discrimination!
Have you faced AI-powered discrimination when applying for a job?
If so, and you live in the US or Canada, we would like to hear your story.
In a world where understanding and inclusivity should be the norm, it's disheartening to find that individuals with learning disabilities, including Dyslexia, Dysgraphia, Dyspraxia, and Dyscalculia, continue to face barriers and discrimination, especially with the latest LLM-powered HR software.
Let's work together to make a difference!
Community and Empowerment: Join a community that understands and shares your experiences. Together, we can build a more inclusive society.
Your Rights Matter!
Contact us today to share your story, or get involved: Contact us
Just a start...
This short video only scratches the tip of the metaphorical iceberg of AI-powered dyslexia discrimination.
Can We (and Should We) Use AI to Detect Dyslexia in Children’s Handwriting?
J. Shoy - 2024-02-20
A while ago, some researchers made a computer program to find out if someone's handwriting shows they might have dyslexia.
In the groundbreaking study titled "Can We (and Should We) Use AI to Detect Dyslexia in Children’s Handwriting?" released in 2019, researchers Katie Spoon, David Crandall, Katie Siek and Marlyssa Fillmore have examined a computer system that leverages artificial intelligence (AI) to identify potential dyslexia through children's handwriting.
This approach was intended to be used as an early detection of dyslexia, a language-based learning disability that significantly impacts reading ability.
The study carefully outlined how the AI system goes beyond mere analysis of handwriting aesthetics, pinpointing specific traits indicative of dyslexia.
The research team emphasized the importance of collaboration with school psychologists, saying that such partnerships aim to comprehensively validate the AI system's accuracy and reliability.
Among the potential improvements discussed are the introduction of a timed writing test—stemming from observations that dyslexic students tend to write less in the same amount of time as their non-dyslexic counterparts—and the exploration of including drawing tasks as part of the screening process.
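To make the general shape of such a system concrete, here is a simplified sketch in Python using scikit-learn. The feature names and numbers are invented for illustration; they are not the features, data, or model used by the researchers:

```python
# Toy handwriting-screening classifier. Feature names and values are invented;
# the point is only to show the general shape of this kind of pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [letter_reversals_per_100_chars, spelling_error_rate, words_written_in_5_min]
X_train = np.array([
    [0.5, 0.02, 60], [0.8, 0.03, 55], [1.0, 0.04, 58],   # labelled "no indicator"
    [4.0, 0.15, 32], [3.5, 0.12, 35], [5.0, 0.20, 28],   # labelled "indicator present"
])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

new_sample = np.array([[3.0, 0.10, 40]])
print(model.predict_proba(new_sample))  # a screening probability, not a diagnosis
```

The output of such a pipeline is a screening signal intended to support early detection, not a diagnosis.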
However, the paper does not shy away from addressing the ethical considerations surrounding the use of AI in this context. Even back then, five years ago, the question of whether we should employ such technology to diagnose or identify dyslexia in children's handwriting was a critical aspect of the conversation. It raises concerns about privacy, the potential for misdiagnosis, and the broader implications of relying on AI in educational settings.
Despite these risks, at that time the researchers advocated for the continued development and testing of their system.
This research invites a broader discussion on how we can harness AI for social good, ensuring that advancements in machine learning are used responsibly to empower and assist those in need, particularly vulnerable populations like children with dyslexia.
https://aiforsocialgood.github.io/neurips2019/accepted/track1/pdfs/97_aisg_neurips2019.pdf
Longer video by grassroots advocate J. Shoy
Help me advocate for the Dyslexic Community against AI based discrimination.
Is Google using our data to build a machine to discriminate against us?
By FDBD contributor - J. Shoy, 2024-02-02
According to reports, Google's recent update to its privacy policy now allows the company to collect and use public user data to train its AI models. Data like the e-mails you put through Google Translate.
This new goal appears to be the aid in Google's services and the development of new AI-powered products. This policy update marks a transition from focusing solely on "language" models to encompassing all types of "AI" models, including translation systems and cloud AI services.
Such change has escalated privacy concerns, as it could affect users privacy and the legal landscape surrounding data collection.
Moreover, the fact that these AI technologies are in some cases being used to discriminate against disabled individuals, as well as to identify visible and invisible minority actions and movements online, suggests that Google could potentially harvesting our data, (including that long lost e-mails I translated into Spanish), so to develop a machine that discriminates against us. While there are assurances that protections are in place, some analysts worry that these safeguards are not adequate and possibly, impossible to actually implement.
For more details we suggest you read the fantastic article by writer Matt G. Southern in the Search Engine Journal, entitled "Google Updates Privacy Policy To Collect Public Data For AI Training":
New Report From Anthropic Deeply Underlines the Complex Issue We Have at Hand. Should We Be Worried?
By FDBD contributor - J.Shoy 2024-01-30
Now, we could get lost in a myriad of complicated issues and worry about what this report, “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training” (revised 17 Jan 2024), means for the human race as a whole, but let's talk about dyslexia and learning disabilities.
Here we'd like to point out what this might mean for dyslexic individuals. Dyslexic adults, particularly severely dyslexic adults, have often been subject to extremely complicated work- and employment-related issues. Let's say you made it through school and got a degree thanks to learning-disability assistance. Then you go out into the world and get a job. As soon as the employer starts to realize that they've hired someone who has severe dyslexia, a complicated series of sad events can unfold, one that often leaves dyslexic individuals without promotions, without full-time work, and without success in the organization. Sometimes it even shortens their stay of employment!
Severely dyslexic adults often face many issues which can be mitigated when all the supports are properly in place. However, in the real world, those supports are often not in place, or they're inadequate. Additionally, employers may not see the benefit of implementing these supports and often only see the cost.
So I propose to you a situation where an artificially intelligent HR system (and we know HR has been using AI for a long time) is given the task of simply maximizing profit (which is what I think almost every HR department is tasked with doing). What is to stop it from using its understanding, and its ability to pick up on the learning disabilities of people working in the organization, to discriminate against disabled individuals in very complicated ways that are all legal? Things like simply focusing its reporting on the issues an employee with disabilities has, and noting those shortcomings in performance reviews, or many other perfectly legal techniques to maximize the profit of the organization while discriminating against learning-disabled individuals.
If agents are already able to completely subvert our safety protocols during their training, it's absolutely clear that they would be able to subvert the intention of legal documents related to anti-discrimination policies protecting learning-disabled individuals, or individuals with any kind of disability.
See the full computer science paper here:
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
By FDBD contributor - J.Shoy 2024-01-29
What do you think? Let us know. Check out our SubStack: https://substack.com/@fordyslexicsbydyslexics
Copyright Is Being Completely Ignored by LLM Developers. Do You Think They Magically, Also, Care About Often-Flouted HR Hiring Rules?
By FDBD contributor - J.Shoy 2024-01-29
This Computer Weekly article by Sebastian Klovig Skelton, titled 'GenAI tools ‘could not exist’ if firms are made to pay copyright', emphasizes how far developers are willing to go, even so far as to outright break the law, to release new technology.
And they are releasing it without properly consulting impacted communities, or even the people who originally wrote the work used as training data for these new products. Google's AI is now fully training on our data without our permission.
Do you know anyone who doesn’t have a Gmail account?
Google's well-known philosophy, that if only AI sees it, it's no problem, is flawed. The end result is that Google will develop a massive training dataset which it will then use for its own LLMs (such as Gemini), which could be used for good but could just as easily be misused to discriminate against individuals with learning disabilities like dysgraphia and many others.
In essence, they will 'take' our private data, to build a machine that could be used to discriminate against us. Read the full and well-researched article here: GenAI tools ‘could not exist’ if firms are made to pay copyright
New Article Underlines Why Privacy Issues for the Dyslexic Community Are a Big Deal.
By FDBD contributor - J.Shoy - 2024-01-29
This New Forbes Article Underlines Why Private Data of Dyslexic Individuals Needs Added Protection.
Google's AI is now fully training on our data without most people understanding what's happening. Everyone has a Gmail account!
Google's philosophy, that if only AI sees it, it's no problem, is deeply flawed. The end result is that Google will develop a massive training dataset which it will then use for its own LLMs (such as Gemini), which could be used for good but could easily be misused to discriminate against individuals with learning disabilities like dysgraphia and many others.
Read the full article by Zak Doffman here:
Sign The Petition:
Stop the Development of Tools that Discriminate Against Learning-Disabled Individuals!
Amazing Interview with Connor Leahy on TRT called, "Why this top AI guru thinks we might be in extinction level trouble..."
AI expert Connor Leahy gave an informative interview on the Turkish television channel TRT on Jan 22, 2024.
In this interview, Leahy expresses his personal opinions on the humanitarian issues surrounding AI development.
Leahy's points relate to big-picture thinking, but they are extremely relevant to many of the issues brought forth by FDBD. Though we do not share his ideas or concerns exactly, and we are not officially endorsing him or his movement, he is an important voice in the conversation and has important points to consider and debate.
In this interview Leahy recommends the charity "Control AI": https://controlai.com/deepfakes
Video citation: TRT World
Book Review: Algorithms of Oppression - A must read for the 21st century.
FDBD contributor - J. Shoy - 2024-02-20
Foundational to the formation of FDBD is the need for dignity for disabled people.
The issues brought forth in Safiya Umoja Noble's work centre around culture, class, race, and visibility as they stood around 2016. What's clear is that this early work is relevant to the new disability issues of 2024; the issues of 'now' mirror the same pattern of pressures and the same need for intervention.
Algorithms of Oppression brilliantly explores the intersectionality of these challenges, shedding light on how modern technology and algorithms perpetuate systemic biases, further marginalizing already vulnerable communities.
Noble's meticulous research and compelling narrative make a powerful case for urgent action to address these injustices. This book, at the time, served as a crucial wake-up call for societies to reevaluate the ethical implications of our increasingly digital world and to strive for a future where human dignity is truly universal and upheld for all.
However, now that it's 2024, this message has only become more important, the issues have seemingly only gotten worse, and this book will likely be even more relevant and important next year!
This wildly offensive history animation is critical for those who wish to advocate for individuals with learning disabilities to… endure watching.
FDBD contributor - J. Shoy - 2024-01-20
With the title 'Low I.Q. People Forced to Become Soldiers (Vietnam War),' this crudely drawn animation discussing Project 100,000 by the 'Simple History' YouTube channel is important to watch, despite its incredibly unethical characterization of developmentally delayed, learning disabled, or otherwise disabled individuals.
Why is it so important? Firstly, it's crucial to understand Project 100,000, as it signifies a continued attitude and approach to manipulating learning disabled individuals, including people with severe dyslexia. It exposes values and approaches that multiple militaries around the world likely hold, even to this day, regarding disabled individuals.
Secondly, it's important to recognize how contemporary historians still perceive learning disabled, developmentally delayed, and disabled individuals as a whole. The video itself reflects the attitudes of the year 2024.
For those advocating for the disabled, this video serves as an important barometer for the prevailing attitudes of the time. It's suspected that the 'IQ' tests discussed in the video have not been fully adapted (as of 2024) for individuals with learning disabilities, and do not take these disabilities into account. This could result in people with dyslexia who have high IQs being scored as low-IQ by these tests.
In conclusion, it's not a good video; it's offensive. But it's important for advocates to understand where society as a whole is, and where it was, and the history it recounts is true and critical to know. If you find the video more offensive than the history it describes, you are the problem.
For those looking for a less offensive and better-researched take on this historically relevant issue, here is a much more informed video, titled "McNamara's Morons - The Low Intelligence Soldiers Used as Guinea Pigs in the Vietnam War":