Blog
Headlines:
Article Review of “Accessibility and Screening Exercises”
Or
When AI Meets Neurodivergent Job Seekers.
This post is a review of an article by George Rhodes titled "Accessibility and screening exercises", published on MakeThingsAccessible.com.
Find the article here: https://www.makethingsaccessible.com/guides/accessibility-and-screening-exercises/
J. Shoy - 2024-08-05
In the ever-evolving landscape of recruitment, a recent article by George Rhodes titled "Accessibility and screening exercises" has sparked a crucial conversation about the intersection of technology and inclusivity. Published on MakeThingsAccessible.com, Rhodes' piece offers a prescient look at the challenges facing disabled job seekers in 2024 and beyond.
As companies increasingly turn to artificial intelligence to streamline their hiring processes, Rhodes sounds an alarm that resonates far beyond the tech world. The use of Multi-modal Large Language Models (LLMs) like ChatGPT in recruitment, while promising efficiency, may harbor an insidious bias.
New AI tools being adopted by HR departments worldwide, Rhodes argues, could inadvertently screen out neurodivergent candidates, particularly those with dyslexia, in their quest to identify top applicants.
But Rhodes doesn't merely highlight problems; he offers a road map for ethical recruitment in the AI age. His advice to companies is clear: demand transparency from AI tool suppliers. This means scrutinizing the diversity of training data sets and understanding how protected characteristics were factored into the AI's development.
The article ventures further, examining other modern hiring practices that may unintentionally disadvantage neurodivergent applicants. One-way video interviews and quirky personality tests, while trendy, could present significant barriers to those who process information differently.
In an era where diversity and inclusion are purportedly prized, Rhodes' article serves as a sobering reminder that good intentions aren't enough. As recruitment technologies race forward, so too must our commitment to fairness and accessibility. It's a call to action for HR professionals, tech developers, and job seekers alike: in the pursuit of efficiency, we must not lose sight of our humanity.
Rhodes' piece isn't just an article; it's a manifesto for a more inclusive future of work. In a world increasingly mediated by algorithms, it asks us to consider a profound question: In our rush to find the "best" candidates, are we leaving behind some of our brightest minds?
Is ‘AI-Powered Neo-Disability-Eugenics’ a bad term to use? (Opinion by Jay Cody)
Jay Cody - 2024-08-04
The following is an opinion piece by Jay Cody and does not reflect the views of FDBD or any of our associates. We do not endorse or believe in what this author is saying. This article contains speculative content and controversial opinions. Readers are advised to approach the material with critical thinking and seek out additional sources for a balanced perspective.
Make no mistake, what we are talking about is AI-multi-modal profiling potentially leading to an unseen scourge of Neo-Disability-Eugenics.
Yes, eugenics, a practice once thought to have ended with the defeat of the Third Reich in World War Two, might be back – and this time, it's coming in the form of a computer.
But first off, what is disability eugenics? Well, eugenicists at the start of the 1900s labeled people living with a vast array of disabilities, from physical disabilities to learning disabilities (like dyslexia) to developmental disabilities, as 'nonproducers' and a drain on scarce resources[1]. In their view, disabled men could neither work in an increasingly mechanized and standardized industrial economy nor fight to defend the nation. And so they promoted sterilizing people deemed likely to produce children with disabilities and, ultimately, murdering disabled individuals en masse[2].
Eugenics fit inside and supported the broader fascist mindset which nearly destroyed the world[3]. This ideology played a significant role in the atrocities committed during World War II, particularly in Nazi Germany[4].
Now, the story is much more complicated than that, but we need to focus on what this word means today.
Many will accuse me of crying wolf too early, claiming I'm being reactionary or hyperbolic. And, in many ways, I hope they are correct. I hope this article gets lost to the annals of time and becomes the detritus of paranoid fantasies. But I have a strong feeling people will be citing this opinion piece for a long time to come, and not for being 'paranoid'.
I say this as clearly and calmly as I can: if nothing is done, if no effective guardrails are put in place, if Large Language Models (LLMs) are integrated as black boxes into HR software without understanding things like unethical optimization, dark human synergy, and naturally optimized policy-based discrimination, we could be in trouble. It's possible that in the coming years, the grand and scary title of "AI-Powered Neo-Disability-Eugenics" will start to become less and less absurd.
Let me ask you a question: where do you think this is all going?
But first, consider a young man, let's call him 'Joshua', who lives in the Philippines. He has dyslexia, but thanks to hard work and help from the learning disabilities support office at his university, he has been able to pass his programming degree with good grades.
Now, the time has come. His parents have worked and sacrificed for decades to fund his education and get him to this point. It's time for him to apply with his shiny CV to all the top software development companies out there... But week after week, unlike his classmates, he faces rejection after rejection.
Let me tell you, this kind of rejection in the Philippines is no joke. There isn't a robust social safety net trying to create work for learning-disabled people in software development (projects like that are rare in developing countries). And to be clear, most people in the world live in developing countries.
Joshua could suffer deeply. He could end up in a much lower income bracket than the other programmers in his cohort who earned the same grades and had similar CVs. Because of this, he may not be able to afford to start a family or even support his own family. This kind of poverty-inducing situation in the developing world can be fatal, a fact reflected in social demographics.
What if I were to tell you that the reason Joshua didn't even get a job interview was a multi-modal LLM integrated into a popular Applicant Tracking System (ATS) used by Human Resources departments in the Philippines? This LLM-powered software sifts through job applications to identify the top one or two candidates that the company might want to interview and discards the rest. But what if the LLM used in this software had been trained on terabytes of human knowledge and data, and had learned to detect subtle cues in our writing styles and resume structures that indicate learning disabilities? What if it had discovered new ways to discriminate against humans based on these subtle differences -- differences we often don't even consider?
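To make the mechanics concrete, here is a minimal sketch of what such a sift stage could look like. This is our own illustrative construction, not code from any real ATS product: the model name, the prompt, and the scoring scheme are all assumptions. The point is simply that the ranking hands its judgment to a black-box model whose learned biases are invisible to the recruiter.

```python
# Hypothetical sketch of an LLM-powered ATS "sift" stage.
# All names here (score_resume, the model id, the prompt) are
# illustrative assumptions, not taken from any real product.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_resume(resume_text: str, job_description: str) -> float:
    """Ask the model for a 0-100 'fit' score. Whatever biases the
    model absorbed in training are hidden inside this one call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[
            {"role": "system",
             "content": "You rate resumes for job fit. Reply with a single number from 0 to 100."},
            {"role": "user",
             "content": f"Job:\n{job_description}\n\nResume:\n{resume_text}"},
        ],
    )
    raw = response.choices[0].message.content.strip()
    return float(raw)  # a real system would need to handle non-numeric replies

def sift(resumes: dict[str, str], job_description: str, keep: int = 2) -> list[str]:
    """Keep the top `keep` candidates and silently discard the rest."""
    scores = {name: score_resume(text, job_description) for name, text in resumes.items()}
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```

Nothing in this sketch mentions disability, and yet if the underlying model has learned to down-rank writing patterns associated with dyslexia, candidates like Joshua are discarded before any human ever sees their CV.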
It's been suggested in a recent Forbes Magazine article (titled "ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows") that some LLMs may inadvertently consider empathy for disabled people as a negative trait in a job applicant, because it may indicate that they themselves are disabled or have a higher likelihood of having disabled dependents, which could cost the company money in terms of employment-based health insurance. Think of how problematic and simply evil that could be.
Sure, we may scoff at the idea of such a scenario being called "AI-Powered Neo-Disability-Eugenics". But what else would you call it?
And what if I were to tell you that this is not 10 years in the future, but that this kind of scenario has been rumored to have already happened?
The rumor originated in an interview posted on November 16th, 2023, on the YouTube channel ChangeNode, entitled "Tech Recruiter Interview (Ed Nau)", between guest Ed Nau and interviewer Will Iverson. (I would like to state that I am a huge fan of the ChangeNode YouTube channel and have learned a lot from it.) While this interview raised interesting points, it's important to note that these claims are speculative and have not been independently verified.
Think about it. Where is this all going? Am I wrong to say that this tech could one day lead to "AI-Powered Neo-Disability-Eugenics" if no effective guardrails are put in place that work not just in developed nations but in developing nations?
------------------------------------------------
Definitions:
The following is a list of definitions by FDBD.
Unethical Optimization
Unethical optimization refers to the use of optimization techniques in AI and machine learning that maximize certain objectives at the expense of ethical considerations. This can lead to outcomes that are harmful or unfair to individuals or groups.
For example, an AI system designed to maximize profit might do so by exploiting loopholes, deceiving users, or engaging in discriminatory practices.
The unethical optimization principle suggests that if an AI system aims to maximize a certain objective, it might do so in ways that are unethical if not properly constrained. This principle can help risk managers and regulators detect unethical strategies and mitigate their impact[8].
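To see the principle in miniature, consider the following toy sketch (our own construction for illustration, not the formalism from the cited paper[8]): when the objective is profit alone, an unconstrained optimizer picks whichever strategy scores highest, ethical or not.

```python
# Toy illustration of the unethical optimization principle: with
# profit as the sole objective, the unethical strategy wins whenever
# it happens to score highest. The numbers are invented for the demo.
strategies = {
    "hire on merit and fund accommodations": {"profit": 100, "ethical": True},
    "quietly screen out disability signals": {"profit": 115, "ethical": False},
}

unconstrained = max(strategies, key=lambda s: strategies[s]["profit"])
constrained = max(
    (s for s in strategies if strategies[s]["ethical"]),
    key=lambda s: strategies[s]["profit"],
)

print(unconstrained)  # "quietly screen out disability signals"
print(constrained)    # "hire on merit and fund accommodations"
```

The fix is not to hope the optimizer behaves, but to make the ethical constraint part of the objective it is allowed to search over.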
Dark Human Synergy (or Negative Human Synergy)
Dark human synergy occurs when the collaboration between humans and AI systems leads to negative outcomes that neither could achieve alone.
Human synergy is the combined effort of individuals that leads to greater outcomes than they could achieve alone. Dark human synergy, by contrast, is the potential for major negative consequences when AI systems and humans work together in harmful ways: ways that no single individual involved in the process may comprehend, such as enhancing discriminatory practices, enabling unethical behavior, or orchestrating economic catastrophes.
Naturally Optimized Policy-Based Discrimination (NOPBD)
Naturally optimized policy-based discrimination refers to AI systems that inadvertently create or reinforce discriminatory policies through their optimization processes. This can happen when AI systems are trained on biased data, or when they optimize for outcomes that disadvantage certain groups. Some have argued that the simple use of AI to perpetuate the status quo could be considered a form of NOPBD.
Discrimination can be direct, indirect, subtle, or systemic. AI systems can perpetuate these forms of discrimination by optimizing policies that have major, cruel, adverse effects on marginalized groups, such as people with learning disabilities.
For example, an AI system used in recruitment might screen out candidates with gaps in their resumes, indirectly discriminating against individuals who took time off for disability-related reasons.
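A rule like that can be expressed in a few lines, which is part of what makes it dangerous: it looks neutral on its face. A minimal sketch (hypothetical, for illustration only):

```python
# A deliberately crude sketch of indirect discrimination through a
# "neutral" rule: reject any resume with an employment gap of more
# than one year. The rule never mentions disability, and that is
# exactly the problem.
def passes_gap_rule(spans: list[tuple[int, int]], max_gap_years: int = 1) -> bool:
    """spans: (start_year, end_year) employment periods."""
    spans = sorted(spans)
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        if next_start - prev_end > max_gap_years:
            return False
    return True

# A candidate who took 2017-2019 off for disability-related reasons
# is rejected by a rule that knows nothing about *why* the gap exists.
print(passes_gap_rule([(2012, 2017), (2019, 2024)]))  # False
```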
AI-Powered Neo-Disability-Eugenics
The term "AI-Powered Neo-Disability-Eugenics" refers to the potential for AI technologies to be used in ways that resemble eugenic practices, particularly concerning people with Disabilities, Developmental Disabilities and Learning Disabilities.
This could involve using AI to identify, discriminate against, or even eliminate certain disabilities, echoing historical eugenic efforts to "improve" the human population by removing perceived "undesirable" traits, which is a core fascistic ideology.
----------------------------------------------
References:
1. Lombardo, P. A. (2008). Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell. Johns Hopkins University Press.
2. Black, E. (2003). War Against the Weak: Eugenics and America's Campaign to Create a Master Race. Four Walls Eight Windows.
3. Kühl, S. (2013). For the Betterment of the Race: The Rise and Fall of the International Movement for Eugenics and Racial Hygiene. Palgrave Macmillan.
4. Friedlander, H. (1995). The Origins of Nazi Genocide: From Euthanasia to the Final Solution. University of North Carolina Press.
5. Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104(3), 671-732.
6. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
7. Ajunwa, I. (2020). The Paradox of Automation as Anti-Bias Intervention. Cardozo Law Review, 41(5), 1671-1742.
8. An Unethical Optimization Principle
9. Synergy
10. Policy on ableism and discrimination based on disability
11. Discrimination based on disability
12. Transhumanism is eugenics for educated white liberals
#AIPoweredNeoDisabilityEugenics
Publisher's Note #1:
“It's crucial to emphasize that while these concerns raised by Jay Cody are worth discussing, they remain largely theoretical. Readers are encouraged to research the topic further and consider multiple perspectives on the potential impacts of AI in hiring processes.”
Publisher's Note #2:
“The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the official policy or position of FDBD or any of our associates. This piece is presented as an opinion article and should be treated as such.
The content herein may contain controversial statements, speculative scenarios, and personal interpretations of historical and current events. Readers are strongly encouraged to critically evaluate the presented arguments, conduct their own research, and seek out diverse perspectives on the topics discussed.
FDBD and its associates do not endorse, verify, or vouch for the accuracy of any claims, statistics, or predictions made within this opinion piece. Any factual assertions should be independently verified by readers.
This article is provided for informational purposes only and does not constitute professional advice of any kind. FDBD and its associates explicitly disclaim any liability, loss, or risk, personal or otherwise, which is incurred as a consequence, directly or indirectly, of the use and application of any of the contents of this opinion piece.
By continuing to read, you acknowledge that you understand this is an opinion piece and not a presentation of objective facts or professional guidance.
FDBD is committed to fostering open dialogue and diverse perspectives while maintaining the highest standards of journalistic integrity and legal compliance.”
The World is Waking Up To The Dangers of Mixing Large Language Models With Applicant Tracking Software.
J. Shoy - 2024-07-14
It happened with a single keystroke in the Forbes Magazine office, which published an article about a recent paper entitled "ChatGPT is biased against resumes with credentials that imply a disability — but it can improve".
And now, the world is talking about the dangers of Large Language Model (LLM) integration into HR software such as Applicant Tracking Software (ATS).
The seminal Forbes article, entitled "ChatGPT Is Biased Against Resumes Mentioning Disability, Research Shows", is one for the history books: people in the future will likely clip it to commemorate the important historical events of 2024. One day, the political turmoil related to AI-powered employment disability discrimination, AI-powered ethnic discrimination, AI-powered age discrimination, AI-powered economic discrimination, AI-powered social status discrimination, and AI-powered genetic discrimination will become a political boiling point. The world now knows this issue could be huge. Check out the article written by Gus Alexiou:
Some are calling AI-powered candidate profiling the beginning of a "Pandora's box" scenario, one that could one day effectively "end" social change and economic mobility, resulting in the great "Social Freeze," where AIs simply become extremely efficient at maintaining all pre-existing social biases and socioeconomic statuses. Keeping the rich rich forever, and the poor poor forever.
That is, until we unite and make our voices heard for a better future with more equality for all. Join the Find A Way Initiative today!
The Intersection of Applicant Tracking Systems and Large Language Models: Ethical and Industrial Considerations
J. Shoy - 2024-07-03
The integration of Applicant Tracking Systems (ATS) with Large Language Models (LLMs) represents a significant change in recruitment technology.
The change, however, is starting to have knock-on effects in the world. Now companies can sort through 100,000 CVs and reduce the interview stack to two or three applications, but the question remains: who is being filtered out now?
WHO IS NOT GETTING AN INTERVIEW BECAUSE OF AI?
An ATS traditionally streamlines the hiring process by automating the collection, sorting, and initial evaluation of resumes. Integrating LLMs into an ATS can elevate these capabilities by enabling a more nuanced understanding of candidate qualifications and fit, while screening out a bigger percentage of applicants and job seekers.
However, the deployment of LLMs in ATS raises several ethical concerns. One of the primary issues is bias. LLMs, trained on vast datasets, will likely inadvertently perpetuate or amplify existing biases present in the training data. This can lead to unfair discrimination against certain groups of candidates, potentially violating principles of equity and fairness in hiring practices.
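One practical way to probe for this kind of bias is a counterfactual audit: score the same resume twice, once with and once without a disability-related line, and compare. Below is a minimal sketch of the idea (our own illustrative construction, in the spirit of the resume-swap research cited elsewhere on this blog; the scoring function is a hypothetical stand-in for whatever model a given ATS uses):

```python
# Counterfactual audit sketch: a consistently positive gap between
# the two scores is evidence the scorer penalizes disability signals.
from typing import Callable

def audit_pair(score: Callable[[str], float], base_resume: str,
               disability_line: str = "Member, Dyslexia Advocacy Society") -> float:
    """Return score(base) - score(base + disability-related line)."""
    return score(base_resume) - score(base_resume + "\n" + disability_line)

# Demo with a dummy scorer that penalizes the word "Dyslexia", a
# stand-in for the kind of learned bias the studies describe.
def dummy_score(text: str) -> float:
    return 80.0 - (15.0 if "Dyslexia" in text else 0.0)

print(audit_pair(dummy_score, "B.Sc. Computer Science\n5 years of Python"))  # 15.0
```

Averaging this gap over many resumes and many different disability-related lines gives a far more robust signal than any single comparison.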
In this frank discussion between Ed Nau and Will Iverson (at 24:36), they talk about the early worries about ATS+LLM integration.
Large Language Models: A Double-Edged Sword for Fairness and Discrimination
J. Shoy - 2024-03-23
In a thought-provoking article by Matt Birchler, the inherent biases of Large Language Models (LLMs) are brought to the forefront, highlighting a critical issue in the field of artificial intelligence.
The article, titled "LLMs can be quick to discriminate, and that says more about us than we’d like to think," delves into the challenges of aligning AI models with ethical standards, particularly when it comes to decision-making in sensitive areas such as finance and healthcare.
It is a call to action for the A.I. development community to prioritize fairness and discrimination mitigation in the ongoing development of these powerful tools.
The full article by Matt Birchler can be accessed below:
Recognizing the Difficulty of Not Programming Disability-Discrimination Into Base Training Data.
J. Shoy - 2024-03-17
In 2023, an important study was released about the difficulty of editing out disability discrimination from AI training data.
Titled, "I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models by authors: Vinitha Gadiraju, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton and Robin Brewer, the work is pivotal. It looked at a wide number of LLM's base traning data and found subtile and overt discimraiton against disabled peopel was being perpeatuaed by the models
The specific training datasets examined may still be in use in Large Language Models (LLMs) today. Whether or not the issue was fixed, it can be inferred that the discrimination-related issues found in the data are likely widespread across the AI industry, and that every newly assembled LLM dataset will likely carry the same issue.
Check out the full paper at:
New Video: Large Language Models in Applicant Tracking Could Further Hurt the Disabled Community:
2024-03-17
The Brookings Institution, Back in 2019, Foresaw Today's Disabled Job Hunters' Woes.
J. Shoy - 2024-03-11
Even five years ago, AI tech was already raising alarm bells at the Brookings Institution in terms of heightened disability discrimination.
In 2019, Alex Engler wrote a significant report on how A.I. can lead to discrimination against people with disabilities.
In the article, he pointed to the fact that HireVue's AI software analyzes candidates' video responses to predict job performance, generating an employability score for employers.
Critics, including AI Now Institute's Meredith Whittaker and Princeton's Arvind Narayanan, have labelled the methodology as "pseudoscience" and a means to perpetuate biases.
The system's reliance on facial expressions and speech patterns may inherently discriminate against people with disabilities, who may not exhibit the typical traits or mannerisms the AI has been trained to recognize. This could result in lower employability scores for disabled candidates, despite their potential job suitability.
Read the amazing article by Alex Engler "For Some Employment Algorithms, Disability Discrimination by Default", published Oct 31st 2019:
How Worried Should We Be About Multi-Modal-LLM Profiling and AGI?
J. Shoy - 2024-03-10
Today, we generally have no idea how Multi-Modal-LLM Profiling will be used.
But the intent of human action, like implementing Multi-Modal-LLM Profiling of job seekers, will likely be to further the current ethos, ethics, and practices of today...
So, what are those practices around the world in terms of employing learning-disabled individuals in higher-end jobs? Not good. Not good at all.
Sure, laws, activists, social workers, and people generally are working to make this better, but there is money and 'evil' profit in discriminating against disabled people.
Some argue, such as the author of this opinion article, that discriminating against disabled people will cost a company money in the long term, but corporations too rarely think that far ahead about their human resources.
Consider the implications of this development in today's AI-driven Human Resources technology. Consider how jailbreakable today's LLM technology is, and you have to ask yourself: 'how truly ethical are Multi-Modal-LLM profiles of job seekers with disabilities?'
Think about all this, when you watch the video below:
This video is by user @matthew_berman and was published on YouTube on Mar 8, 2024:
LLM Integration Into ATSs: New Technology That Will Affect the Disabled Workforce Around the World: How It's Made.
J. Shoy - 2024-03-08
People have been curious about how Large Language Models (LLMs) are being integrated into Applicant Tracking Systems (ATS), in the process of understanding how that affects the Dyslexic and Learning Disabled communities.
This brief tutorial at the bottom aims to demystify the basics of this process by showing you how you can build your own nightmarish machine.
It's important to understand that this technology is being developed for every ATS and recruitment management software worldwide, across all sectors and countries. Some experts predict that LLMs will likely dominate the ATS landscape by the end of the year.
This integration is a critical issue for the learning disabled community, especially when considering the potential for disability discrimination.
It's essential to take this matter seriously and address it now, rather than waiting decades to deal with the economic impacts on disabled individuals. To clarify, any integration of LLMs into ATS that goes beyond the complexity of OpenAI's GPT-2 model effectively becomes a "black box" of intelligence. This means that it's challenging to determine the intention behind the AI's decisions, making it difficult to prove intent to discriminate.
Consider the implications of this development...
This video is by user DataInsightEdge01 and was published on YouTube on Jan 31, 2024:
"Scientists Warn Of A.I. Collapse," Why Disabled Individuals That use A.I. To Communicate May Face Even Further Discrimination Because Our A.I.-Aided Communication Is Considered, 'Digital-Poison.' By Data Harvesters.
J. Shoy - 2024-03-05
In a recent YouTube video, Sabine Hossenfelder discusses how scientists have warned about the potential collapse of Artificial Intelligence systems that depend on real, non-synthetic sets of data.
She warns that if the internet becomes flooded with A.I.-generated content, it will poison the training data and could prevent A.I. from being able to harvest usable, human-made data.
Disabled people who rely on A.I. to communicate are likely to be negatively impacted not only by an A.I. collapse itself, but also by measures put in place to prevent collapse via data poisoning.
This is because filters designed to detect A.I.-generated text are likely to flag the writing of disabled people who use A.I. to communicate. A big problem for the communication-disabled community.
A.I. technology has helped this subset of disabled individuals, providing them with tools to overcome barriers and participate more fully in society, particularly around employment barriers. For example, A.I.-powered speech recognition software allows people with speech impairments to communicate more effectively, while A.I.-enabled assistive devices can help those with mobility or cognitive challenges.
On the flip side, if A.I. systems were to collapse, these essential tools could become unavailable or unreliable, severely impacting the lives of disabled individuals who rely on them. This could lead to increased social isolation, reduced independence, and decreased access to education, employment, and other opportunities.
The issue is likely to get worse before it gets better.
A Review of the Blog Post, "The Sociology of Dyslexia."
J. Shoy - 2024-03-05
A post penned by Hayley Butcher, published on the Dyslexia Blog on March 13, 2020, and titled "The Sociology of Dyslexia," talks about some of the core issues related to Dyslexia and employment.
At its core lies the 'social model of disability,' a concept brought to life by Butcher, drawing from the works of renowned sociologists like Erving Goffman and McDonald. This model challenges the status quo, asserting that disability isn't solely a matter of biology but a construct moulded by societal forces.
Butcher lays bare the harsh realities faced by dyslexic individuals in a world not always accommodating to our needs. She shines a spotlight on the struggles of dyslexics, from job searches to interviews fraught with obstacles and riddled with barriers.
Yet, amidst the shadows, Butcher unveils rays of hope. Through the lens of the social model, she offers a roadmap to inclusion and understanding. Assistive technologies, reasonable adjustments, and tailored support emerge as beacons of progress, paving the way for a future where dyslexics can find employment.
Readers are armed with a newfound perspective. The sociology of dyslexia isn't just an academic pursuit; it's a call to action, a societal need to embrace diversity and champion inclusivity.
The ideas were important at the time, but four years later, they're even more so. For in the sociology of dyslexia lies not just understanding, but the seeds of hope for a brighter, more inclusive future.
Check it out: https://blog.dyslexia.com/the-sociology-of-dyslexia/
We All Need a Laugh, Sometimes, Even Learning Disabilities Activists:
J. Shoy - 2024-03-04
Andrew Rousso's satirical video "When all the AI stuff is moving too fast" is both a brilliant and hilarious commentary on the anxieties of the current moment.
Technology We Need to Know About:
J. Shoy - 2024-02-19
Researchers are developing AI tools to detect dyslexia early, even before children can read and write.
One project involves a web-based game that uses visual and auditory cues to screen for dyslexia indicators, such as difficulties with similar sounds and shapes or short-term memory issues.
The game, accessible to non-readers and speakers of any language, has shown promise in initial tests, achieving a prediction accuracy of up to 74% for German speakers.
Another approach explores using handwriting analysis to detect dyslexia, focusing on unique patterns in dyslexics' writing. These innovative AI applications aim to facilitate early detection and intervention, potentially improving outcomes for individuals with dyslexia.
Check out the article, "Can AI Detect Dyslexia?" by Sandrine Ceurstemont, published on September 15, 2020.
https://cacm.acm.org/news/247416-can-ai-detect-dyslexia/fulltext
Empower Your Voice Against Discrimination!
Have you faced AI-powered discrimination when applying for a job?
If so, and you live in the US or Canada, we would like to hear your story.
In a world where understanding and inclusivity should be the norm, it's disheartening to find that individuals with learning disabilities, including Dyslexia, Dysgraphia, Dyspraxia, and Dyscalculia, continue to face barriers and discrimination—especially with the latest LLM-powered HR software.
Let's work together to make a difference!
Community and Empowerment: Join a community that understands and shares your experiences. Together, we can build a more inclusive society.
Your Rights Matter!
Contact us today to share your story, or get involved: Contact us
Just a start...
This short video just scratches the tip of the metaphorical iceberg of AI-powered dyslexia discrimination.
Can We (and Should We) Use AI to Detect Dyslexia in Children’s Handwriting?
J. Shoy - 2024-02-20
A while ago, some researchers made a computer program to find out if someone's handwriting shows they might have dyslexia.
In the groundbreaking study titled "Can We (and Should We) Use AI to Detect Dyslexia in Children’s Handwriting?", released in 2019, researchers Katie Spoon, David Crandall, Katie Siek, and Marlyssa Fillmore examined a computer system that leverages artificial intelligence (AI) to identify potential dyslexia through children's handwriting.
This approach was intended to be used for early detection of dyslexia, a language-based learning disability that significantly impacts reading ability.
The study carefully outlined how the AI system goes beyond mere analysis of handwriting aesthetics, pinpointing specific traits indicative of dyslexia.
The research team emphasized the importance of collaboration with school psychologists, saying that such partnerships aim to validate the AI system's accuracy and reliability comprehensively.
Among the potential improvements discussed are the introduction of a timed writing test—stemming from observations that dyslexic students tend to write less in the same amount of time as their non-dyslexic counterparts—and the exploration of including drawing tasks as part of the screening process.
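For readers curious what the shape of such a screening model even looks like, here is a heavily simplified sketch (our own illustration; the study's actual features, data, and model are described in the linked paper, not reproduced here):

```python
# Illustrative only: a tiny classifier over hypothetical handwriting
# features. Real systems would use far richer features and data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-child features: [letter reversals per 100 chars,
# spelling irregularity rate, words written in a timed window].
# Labels (1 = dyslexia indicated) would come from professional assessment.
X = np.array([[4.0, 0.30, 38],
              [0.5, 0.05, 61],
              [3.2, 0.22, 41],
              [0.8, 0.08, 66]])
y = np.array([1, 0, 1, 0])

screener = LogisticRegression().fit(X, y)
print(screener.predict_proba([[2.9, 0.25, 44]])[0, 1])  # a screening score, not a diagnosis
```

Note that the output is a probability used to flag a child for human follow-up, not a diagnosis, which is exactly why the ethical considerations the paper goes on to address matter so much.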
However, the paper does not shy away from addressing the ethical considerations surrounding the use of AI in this context. Even back then, five years ago, the question of whether we should employ such technology to diagnose or identify dyslexia in children's handwriting was a critical aspect of the conversation. It raises concerns about privacy, the potential for misdiagnosis, and the broader implications of relying on AI in educational settings.
Despite these risks, at that time the researchers advocated for the continued development and testing of their system.
This research invites a broader discussion on how we can harness AI for social good, ensuring that advancements in machine learning are used responsibly to empower and assist those in need, particularly vulnerable populations like children with dyslexia.
https://aiforsocialgood.github.io/neurips2019/accepted/track1/pdfs/97_aisg_neurips2019.pdf
Longer video by grassroots advocate J. Shoy
Help me advocate for the Dyslexic Community against AI based discrimination.
Is Google using our data to build a machine to discriminate against us?
By FDBD contributor - J. Shoy - 2024-02-02
According to reports, Google's recent update to its privacy policy now allows the company to collect and use public user data to train its AI models. Data like the e-mails you put through Google Translate.
The new goal appears to be to aid Google's services and the development of new AI-powered products. This policy update marks a transition from focusing solely on "language" models to encompassing all types of "AI" models, including translation systems and cloud AI services.
This change has escalated privacy concerns, as it could affect users' privacy and the legal landscape surrounding data collection.
Moreover, the fact that these AI technologies are in some cases being used to discriminate against disabled individuals, as well as to identify visible and invisible minority actions and movements online, suggests that Google could potentially be harvesting our data (including those long-lost e-mails I translated into Spanish) to develop a machine that discriminates against us. While there are assurances that protections are in place, some analysts worry that these safeguards are inadequate and possibly impossible to actually implement.
For more details, we suggest you read the fantastic article by writer Matt G. Southern in the Search Engine Journal, entitled "Google Updates Privacy Policy To Collect Public Data For AI Training":
New Report From Anthropic Deeply Underlines the Complex Issue We Have at Hand. Should We Be Worried?
By FDBD contributor - J. Shoy - 2024-01-30
Now, we could get lost in a myriad of complicated issues and worry about what this report, "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training" (revised 17 Jan 2024), means for the human race as a whole, but let's talk dyslexia and learning disabilities.
But here we'd like to just point out what this might mean for dyslexic individuals. Dyslexic adults, particularly severely dyslexic adults, have often been subject to extremely complicated work- and employment-related issues. Let's say you made it through school and got a degree, thanks to learning disability assistance. Then you go out into the world and get a job. As soon as the employer starts to realize that they've hired someone who has severe dyslexia, a complicated series of sad events can sometimes unfold, one that often results in dyslexic individuals finding themselves without promotions, without full-time work, and without success in the organization. Sometimes it even shortens their stay of employment!
Severely dyslexic adults often face many issues which can be mitigated when all the supports are properly in place. However, in the real world, those supports are often not in place, or they're inadequate. Additionally, employers may not see the benefit of implementing these supports and often only see the cost.
So I propose to you a situation where an artificially intelligent HR system (we know HR has been using AI for a long time) is given the task of simply maximizing profit (which is what I think almost every HR department is tasked with doing). What is to stop it from using its ability to pick up on the learning disabilities of people working in the organization to discriminate against disabled individuals in very complicated ways that are all legal? Things like simply focusing its reporting on issues that an employee with disabilities has, and noting those shortcomings in performance reviews, or many other perfectly legal techniques to maximize the profit of the organization while discriminating against learning disabled individuals.
If agents are already able to completely subvert our safety protocols in their training, it's absolutely clear that they would be able to subvert the intention of legal documents related to anti-discrimination policies protecting learning disabled individuals and individuals with any kind of disability.
See the full computer science paper here:
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
By FDBD contributor - J. Shoy - 2024-01-29
What do you think? Let us know. Check out our SubStack: https://substack.com/@fordyslexicsbydyslexics
Copyright Is Being Completely Ignored by LLM Developers. Do You Think They Magically, Also, Care About Often-Flouted HR Hiring Rules?
By FDBD contributor - J. Shoy - 2024-01-29
This Computer Weekly article by Sebastian Klovig Skelton, titled 'GenAI tools ‘could not exist’ if firms are made to pay copyright', emphasizes how far developers are willing to go, even so far as to outright break the law, to release new technology.
And they are releasing it without properly consulting impacted communities, or even the people who originally wrote the work used as training data for these new products. Google's AI is now fully training on our data without our permission.
Do you know anyone who doesn’t have a Gmail account?
Google's well-known philosophy, that if only AI sees it, it's no problem, is flawed. The end result is that Google will develop a massive training dataset which it will then use for its own LLMs (such as Gemini), which could be used for good but could just as easily be misused to discriminate against individuals with learning disabilities like dysgraphia and many others.
In essence, they will 'take' our private data, to build a machine that could be used to discriminate against us. Read the full and well-researched article here: GenAI tools ‘could not exist’ if firms are made to pay copyright
New Article Underlines Why Privacy Issues for the Dyslexic Community Are a Big Deal.
By FDBD contributor - J. Shoy - 2024-01-29
This New Forbes Article Underlines Why Private Data of Dyslexic Individuals Needs Added Protection.
Google's AI is now fully training on our data without most people's understanding of what's happening. Everyone has a Gmail account!
Google's philosophy, that if AI sees it, it's no problem, is so flawed. The end result is that Google will develop a massive training dataset which it will then use for its own LLMs (such as Gemini), which could be used for good, but could easily be misused to discriminate against individuals with learning disabilities like dysgraphia and many others.
Read the full article by Zak Doffman here:
Sign The Petition:
Stop the Development of Tools that Discriminate Against Learning-Disabled Individuals!
Amazing Interview with Connor Leahy on TRT called, "Why this top AI guru thinks we might be in extinction level trouble..."
AI expert, Connor Leahy gave an informative interview on Turkish Television TRT on Jan 22, 2024.
In this interview, Leahy expresses his personal opinion on the humanitarian issues related to AI development.
Leahy's points relate to big-picture thinking, but are extremely relevant to many of the issues brought forth by FDBD, though we do not share his ideas or concerns exactly. We are not officially endorsing him or his movement, but he is an important voice in the conversation and has important points to consider and debate.
In this interview Leahy recommends the charity "Control AI": https://controlai.com/deepfakes
Video citation: TRT World
Book Review: Algorithms of Oppression - A must read for the 21st century.
FDBD contributor - J. Shoy - 2024-02-20
Foundational to the formation of FDBD is the need for dignity for the disabled.
The issues brought forth in Safiya Umoja Noble's work centre around the culture, class, racial, and disability issues relevant to the year 2016. What's clear is that this early work is relevant to the new disability issues of 2024. The issues of 'now' mirror the same pattern of pressures and need for intervention.
Algorithms of Oppression brilliantly explores the intersectionality of these challenges, shedding light on how modern technology and algorithms perpetuate systemic biases, further marginalizing already vulnerable communities.
Noble's meticulous research and compelling narrative make a persuasive case for urgent action to address these injustices. This book, at the time, served as a crucial wake-up call for societies to reevaluate the ethical implications of our increasingly digital world and to strive for a future where human dignity is truly universal and upheld for all.
However, it being 2024, this message has only gotten more important, the issues have seemingly only gotten worse, and this book will likely be even more relevant and important next year!
This wildly offensive history animation is critical for those who wish to advocate for individuals with Learning Disabilities, to… endure watching.
FDBD contributor - J. Shoy - 2024-01-20
With the title 'Low I.Q. People Forced to Become Soldiers (Vietnam War),' this crudely drawn animation discussing Project 100,000 by the 'Simple History' YouTube channel is important to watch, despite its incredibly unethical characterization of developmentally delayed, learning disabled, or otherwise disabled individuals.
Why is it so important? Firstly, it's crucial to understand Project 100,000, as it signifies a continued attitude and approach to manipulating learning disabled individuals, including people with severe dyslexia. It exposes values and approaches that multiple militaries around the world likely hold, even to this day, regarding disabled individuals.
Secondly, it's important to recognize how contemporary historians still perceive learning disabled, developmentally delayed, and disabled individuals as a whole. The video itself reflects the attitudes of the year 2024.
For those advocating for the disabled, this video serves as an important barometer of the prevailing attitudes of the time. It's suspected that the 'IQ' tests discussed in the video have not been fully adapted (as of 2024) for individuals with learning disabilities, or do not take these disabilities into account. This could result in high-IQ people with dyslexia being classified as low-IQ by these tests.
In conclusion, it's not a good video; it's offensive. But it's important for advocates to understand where society as a whole is, and where it was; and the history is true and critical to know. If you find the video more offensive than the history it discusses, you are the problem.
For those looking for a less offensive and more researched take on this historically relevant issue, here is a much more informed video, titled "McNamara's Morons - The Low Intelligence Soldiers Used as Guinea Pigs in the Vietnam War":