Reading
1. ELIZA
- An early computer programme that mimics human conversation through natural language (NL) processing.
- NLP: ELIZA analyses user input using decomposition rules triggered by specific keywords and responds using reassembly rules (a minimal sketch follows this list).
- Operation: Runs on the MAC time-sharing system at MIT, allowing multiple users to interact simultaneously.
- Scripts: Functionality depends on scripts (sets of keyword-based rules) that allow it to generate responses. Scripts can be edited and expanded, allowing for customisation and improvement.
- Psychological Simulation: Can mimic a therapist; by responding to vague statements, it creates the illusion of understanding.
- Technical Challenges: Identifying keywords, handling context, and responding when no keywords are detected.
- Real communication requires more than surface-level responses, pointing toward systems that could adapt and learn from conversations.
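A minimal Python sketch of the keyword / decomposition / reassembly loop described above. The rules and replies are hypothetical stand-ins, not Weizenbaum's original script:

```python
import re

# Hypothetical ELIZA-style rules: each pairs a decomposition pattern
# (a keyword-triggered regex with a capture group) with a reassembly template.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the hard case from the notes: no keyword detected

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:  # decomposition rule fired; reassemble around the capture
            return template.format(match.group(1))
    return FALLBACK

print(respond("I am feeling sad"))  # Why do you say you are feeling sad?
print(respond("what a nice day"))   # Please go on.
```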
2.1 Stochastic Parrots
- Raises concerns about the rapid expansion of LLMs in NLP and the risks associated with their development and deployment.
- Environmental & Financial Costs - Training large models consumes large amounts of energy and carries substantial financial cost.
- Bias & Representation Issues - LLMs are often trained on large, uncurated datasets from the web. Models tend to overrepresent hegemonic viewpoints, racism, sexism, etc.
- Lack of True Understanding - LLMs do not achieve true NL understanding but only manipulate linguistic forms. This creates misleading impressions about the capabilities of these models and can cause harm.
- Social and Ethical Implications - LLMs risk perpetuating harmful stereotypes and ideologies, increasing the potential for discrimination. They could also be used for malicious purposes, such as generating extremist content or misinformation.
- Recommendations - The paper advocates shifting research priorities away from increasing the scale of LLMs and towards more thoughtful approaches, such as curating datasets responsibly.
- While LLMs have advanced NLP, their expansion presents substantial risks; the authors call for more responsible development, especially regarding ethical design.
- Discrimination at the hands of others who reproduce racist, sexist, ableist, extremist, or other harmful ideologies reinforced through interactions with synthetic language. Large LMs exhibit various kinds of bias, including stereotypical associations.
2.2 Talk about race
- Challenges chatbots face when engaging in conversations about race
- Blacklists and Race-Talk: Chatbots often rely on blacklists to block offensive language; however, this method is limited because it filters words indiscriminately, without regard to context (see the sketch after this list).
- Data Bias - Chatbots are trained on large datasets that often reflect racial biases; Microsoft's Tay chatbot became racist within hours of interacting online due to exposure to biased conversations.
- Language Processing and Context - NLP systems struggle with the complexities of race-talk because they focus heavily on syntax rather than the broader context of the conversation.
- ML Accountability - Deep learning models used in chatbots are often inscrutable, making bias difficult to understand or correct. The paper calls for more tuneable algorithms that can be adjusted to handle race-talk more responsibly.
- Recommendations - Improve race-talk in chatbots by creating diverse, racially conscious databases.
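A toy Python sketch of the blacklist limitation noted above; the word list and messages are hypothetical:

```python
# Hypothetical blacklist entries; real systems use much longer lists.
BLACKLIST = {"race", "racist"}

def is_blocked(message: str) -> bool:
    # Surface-level check: strip basic punctuation, compare word by word.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(BLACKLIST)

# Indiscriminate filtering: legitimate race-talk is blocked...
print(is_blocked("Can we talk about race respectfully?"))   # True
# ...while biased content that avoids the listed words slips through.
print(is_blocked("People like them are not trustworthy"))   # False
```

Because the filter only sees surface words, it blocks legitimate race-talk while letting biased phrasing that avoids the listed words pass, which is the limitation the paper highlights.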
2.3 Man is to computer programmer
2.4 Word embeddings
2.5 Best friends are Linguists
2.6 Introduction to Linguistics
2.7 Unreasonable effectiveness of Data
2.8 Speech and Language Processing
- Raise awareness of bias -> Explainable AI, debiasing, Statements
- Goals of debiasing -> Reduce bias in word embeddings by ensuring gender-neutral words stay gender neutral; maintain embedding utility by maintaining definitional (inherently gendered) relationships. See the debiasing sketch after this list.
- Training a classifier, what we really want -> to minimise the training error in a way that makes the classifier generalisable, and to estimate the generalisation error of our classifier. See the held-out-split sketch after this list.
- Problematic stereotyped biases -> occupation-gender associations (e.g., "programmer" skewing male, "homemaker" skewing female, as in reading 2.3).
- Fundamental assumptions for building a classifier -> Training data is representative of the real world with respect to the task we are trying to achieve
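A small numpy sketch of the neutralise step behind the debiasing goals above (removing a vector's component along an estimated gender direction, in the spirit of hard debiasing); the 3-d vectors are toy values, not real embeddings:

```python
import numpy as np

def gender_direction(pairs):
    # Average the differences of definitional pairs, e.g. (he, she), (man, woman).
    diffs = [a - b for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def neutralize(v, g):
    # Remove v's component along the gender direction g.
    return v - np.dot(v, g) * g

he, she = np.array([1.0, 0.2, 0.0]), np.array([-1.0, 0.2, 0.0])
g = gender_direction([(he, she)])
programmer = np.array([0.6, 0.5, 0.1])       # toy "biased" embedding
print(np.dot(neutralize(programmer, g), g))  # ~0.0: gender component removed
```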
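And a minimal scikit-learn sketch of estimating generalisation error with a held-out split; the synthetic dataset and logistic-regression model are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real task; the portion held out from
# training is what lets us estimate the generalisation error.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("training error:      ", 1 - clf.score(X_tr, y_tr))
print("estimated gen. error:", 1 - clf.score(X_te, y_te))  # from held-out data
```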