Bias in AI algorithms results from faulty assumptions made during algorithm development or from flaws in the training data. In artificial intelligence, bias manifests in a variety of ways, including ethnic prejudice, gender bias, and age discrimination. Human prejudice – conscious or unconscious – lurks throughout the development of AI systems. Data scientists, too, can make mistakes, ranging from excluding valuable entries to inaccurate labeling to under- and oversampling. Undersampling, for example, can skew the class distribution and cause AI models to disregard minority classes entirely.
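To see why a skewed class distribution matters, consider a minimal sketch with a hypothetical imbalanced label set (95% majority class, 5% minority class): a naive model that always predicts the majority class still scores high accuracy while never detecting the minority class at all.

```python
from collections import Counter

# Hypothetical imbalanced training labels: 950 majority (0), 50 minority (1).
labels = [0] * 950 + [1] * 50
counts = Counter(labels)

# A naive "model" that always predicts the most common class.
majority = counts.most_common(1)[0][0]
predictions = [majority for _ in labels]

# Overall accuracy looks strong despite the model learning nothing useful.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Recall on the minority class reveals it is completely disregarded.
minority_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == 1
) / counts[1]

print(f"accuracy: {accuracy:.2f}")          # 0.95
print(f"minority recall: {minority_recall:.2f}")  # 0.00
```

This is why accuracy alone is a misleading metric on imbalanced data; per-class metrics such as recall expose the bias that the aggregate number hides.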