3.2 Terminology » Glossary
- adversarial attack inputs to an AI that are intentionally designed to cause it to produce an incorrect or faulty output
- barrier of meaning refers to a contrast between humans and AIs: humans are able to understand the meaning of what they perceive, whereas current AIs do not possess such understanding
- binary classifier an algorithm that determines whether an input belongs to a specific class based on a rule that identifies shared characteristic(s); makes predictions based on a linear predictor function
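The linear predictor function mentioned above can be sketched in a few lines of Python; the weights and bias here are hand-picked for illustration, not learned from data:

```python
# A minimal sketch of a binary classifier with a linear predictor
# function: the input's features are weighted, summed, and compared
# against a threshold to decide class membership.
# Weights and bias are illustrative, not learned.

def linear_predict(features, weights, bias):
    """Return True if the weighted sum crosses the decision threshold."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > 0

# Classify a two-feature input with hand-picked weights:
# score = 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1, so the prediction is True
print(linear_predict([1.0, 2.0], [0.5, -0.25], 0.1))
```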
- brittleness the tendency of an algorithm to fail when confronted with unusual data or with data that differs from expectations in a seemingly negligible way
- brute-force in computation, a means of solving a problem that relies on exhaustive search through all possible solutions rather than optimizing a solution for efficiency; brute-force approaches are limited by computing power
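The exhaustive-search idea can be illustrated with a toy example, here guessing a short numeric code by trying every possibility in order (the code and digit set are invented for illustration):

```python
from itertools import product

# A minimal sketch of brute-force search: try every possible
# three-digit combination until one matches, with no cleverer
# strategy. The search space grows exponentially with code length,
# which is why brute-force approaches are limited by computing power.

def crack(code, digits="0123456789", length=3):
    for guess in product(digits, repeat=length):
        candidate = "".join(guess)
        if candidate == code:
            return candidate
    return None

# Exhaustively tries "000", "001", ... until it reaches "042"
print(crack("042"))
```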
- confidence in machine learning, the AI's estimate of the probability that the solution it finds to a particular input is indeed correct
- deep learning a type of machine learning that makes use of artificial neural networks to approximate the complexity of learning that occurs in the brain
- embodiment the physical nature of the mind as manifest by its integration with the sensory body and interactions with the environment; an extension of embodiment is the concept of embodied cognition, which suggests that many features of cognition are only possible when the brain exists within a sensory body and thus questions whether an AI without the ability to sense its complete environment could ever be "intelligent"
- error rate the proportion of incorrect answers or solutions to a task, such that smaller numbers indicate better performance
- human performance the average success rate or score of humans on a particular task that is used as a benchmark for AI performance on that task; often measured via crowdsourcing on a platform such as Amazon's MTurk
- machine learning a computational approach in which algorithms improve performance ("learn") by sampling numerous solutions and selecting those that yield correct or useful output; the algorithm finds patterns in the training data that map characteristics of the input data to the output the user would like to produce
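The "sample solutions and select the best" idea can be sketched with a deliberately tiny example: candidate decision thresholds are tried against labeled training data, and the one producing the most correct outputs is kept. The data points here are invented for illustration:

```python
# A minimal sketch of learning by sampling and selecting:
# try many candidate thresholds and keep the one that
# classifies the most training examples correctly.

training_data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]  # (input, label)

def accuracy(threshold):
    """Count training examples classified correctly by this threshold."""
    return sum((x > threshold) == bool(y) for x, y in training_data)

# Sample candidate thresholds 0.0, 0.1, ..., 4.9 and select the best
best = max((t / 10 for t in range(0, 50)), key=accuracy)
print(best, accuracy(best))
```

Any threshold between 2.0 and 3.0 separates the two classes perfectly here; real machine learning replaces this exhaustive sampling with far more efficient optimization.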
- neural network (or artificial neural network) a series of interconnected computational units ("neurons") that pass information to each other; the network is connected in a hierarchical, layered structure, starting with input units and ending with output units, and each connection between individual units has a changeable weight, such that a higher weight contributes more information to the receiving unit
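The layered, weighted structure described above can be sketched as a pair of forward passes; the weights here are illustrative, whereas in practice they are learned:

```python
# A minimal sketch of an artificial neural network's layered
# structure: each unit in a layer receives a weighted sum of the
# previous layer's values, so higher weights contribute more
# information to the receiving unit. Weights are illustrative.

def layer(inputs, weights):
    """Each row of `weights` connects all inputs to one receiving unit."""
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

hidden = layer([1.0, 0.5], [[0.2, 0.8], [0.6, 0.4]])  # input layer -> hidden layer
output = layer(hidden, [[1.0, -1.0]])                 # hidden layer -> output unit
print(output)
```

Real networks also apply a nonlinear activation function after each weighted sum; that step is omitted here to keep the layering itself visible.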
- parallel computing solves computationally large problems by dividing them into smaller, independent problems that can be executed simultaneously by multiple computers, with the results combined upon completion
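The divide-and-combine pattern can be sketched with Python's standard library, here splitting a large sum into four independent chunks that run in separate processes:

```python
from concurrent.futures import ProcessPoolExecutor

# A minimal sketch of parallel computing: a large sum is divided
# into independent sub-problems, each is computed in a separate
# process, and the partial results are combined at the end.

def chunk_sum(chunk):
    """The independent sub-problem: sum one slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::4] for i in range(4)]  # four independent sub-problems
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = pool.map(chunk_sum, chunks)  # executed simultaneously
    print(sum(partials))  # combined result equals sum(numbers)
```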
- perceptual category a representation of the qualities/characteristics shared by a set of similar but not identical inputs; perceptual categories are a means of organizing individual experiences into generalizable cognitive chunks
- primitives in computer science, a built-in data type that is fully defined and does not need to be learned
- probabilistic programming an approach to computation in which the user specifies probabilistic models, the outcomes of which are inferred by the machine; variables in the models consist of probability distributions rather than, for instance, binary yes/no values
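The idea of variables as distributions can be sketched without a dedicated probabilistic-programming library, using simple Monte Carlo sampling; the model and its parameters are invented for illustration:

```python
import random

# A minimal sketch of the probabilistic-programming idea: a model
# variable is drawn from a probability distribution rather than
# fixed to a yes/no value, and an outcome's probability is
# inferred by repeated sampling. Parameters are illustrative.

random.seed(0)  # make the sketch reproducible

def model():
    skill = random.gauss(0.6, 0.1)   # latent variable: a distribution, not a constant
    return random.random() < skill   # observed outcome depends on the draw

samples = [model() for _ in range(10_000)]
print(sum(samples) / len(samples))   # inferred probability, close to 0.6
```

Dedicated probabilistic programming languages automate this kind of inference, and much more sophisticated variants of it, for models the user writes declaratively.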
- representation in computer science, the form in which data is stored, processed, and transmitted; data must be digitized in order for a machine to compute anything about it
- training data in machine learning, a dataset that contains the intended output; for instance, if the machine should identify Mozart arias from their scores, the dataset must contain scores of Mozart arias that are labeled as such; a rich training dataset with a variety of both positive and negative examples (e.g. many Mozart arias as well as arias by other, non-Mozart composers) will generally improve performance on real-world data
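A labeled training dataset of the kind described above might look like the following; the titles and feature values are purely illustrative, not real score data:

```python
# A minimal sketch of labeled training data: each example pairs an
# input (invented features of a score) with the intended output
# (whether it is a Mozart aria). Negative examples by other
# composers are included alongside the positive ones.

training_data = [
    {"title": "Aria A", "features": [0.9, 0.1], "label": "Mozart"},
    {"title": "Aria B", "features": [0.8, 0.2], "label": "Mozart"},
    {"title": "Aria C", "features": [0.2, 0.7], "label": "not Mozart"},  # negative example
    {"title": "Aria D", "features": [0.3, 0.9], "label": "not Mozart"},  # negative example
]

positives = [ex for ex in training_data if ex["label"] == "Mozart"]
negatives = [ex for ex in training_data if ex["label"] != "Mozart"]
print(len(positives), "positive /", len(negatives), "negative examples")
```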