History and Key Terms for Artificial Intelligence
The genesis of Artificial Intelligence (AI) dates to 1943, when two University of Chicago researchers, Warren McCulloch and Walter Pitts, proposed the concept of a neural network (NN): a computer system modeled on the human brain and nervous system. This early model laid the foundation for more sophisticated artificial neural networks (ANNs), machine learning, and deep learning models.
Today’s nomenclature surrounding AI includes several important terms:
- Neural Network: A computational model composed of interconnected nodes (“neurons”) organized in layers, inspired by—but not biologically equivalent to—the human brain, used to learn patterns and relationships in data.
- Artificial Intelligence (AI): The field of creating systems that can perform tasks typically requiring human intelligence, such as perception, language understanding, reasoning, learning, and decision-making.
- Self-Learning: The capability of a system to improve its performance over time by learning patterns from data or experience, either through machine learning methods or adaptive algorithms, without manual rule updates.
- Natural Language Processing (NLP): The field of artificial intelligence that enables computers to understand, interpret, and generate human language in a meaningful way.
- Model: A mathematical representation learned from data that maps inputs to outputs, enabling predictions, classifications, or decisions based on patterns observed during training.
- Supervised Training: A training approach in which a model learns from labeled data, where each input is paired with a known correct output.
- Unsupervised Training: A training approach in which a model learns patterns, structures, or relationships from unlabeled data without predefined output targets.
- Artificial Neural Network (ANN): A type of machine learning model consisting of layers of interconnected artificial neurons that adjust internal parameters during training to learn relationships between inputs and outputs.
- Machine Learning: A subset of AI focused on developing algorithms that learn from data and improve performance on tasks without being explicitly programmed with task-specific rules.
- Convolutional Neural Network (CNN): A specialized type of neural network designed to process grid-like data, such as images, by automatically learning spatial features through convolutional layers.
- Recurrent Neural Network (RNN): A class of neural networks designed to handle sequential data by maintaining internal state, making them suitable for tasks such as language modeling, speech recognition, and time-series analysis.
- Robotic Process Automation (RPA): Software technology that automates structured, rule-based digital tasks by mimicking human interactions with applications, typically without using machine learning.
- Deep Learning: A subset of machine learning that uses deep neural networks with many layers to learn complex representations from large amounts of data.
- Computer Vision: A field of artificial intelligence that enables machines to interpret, analyze, and derive meaning from visual data such as images and video, including tasks like object detection, image classification, facial recognition, and scene understanding.
- Predictive Analytics: The use of historical data, statistical methods, and machine learning to forecast future outcomes and trends.
- Explainable AI (XAI): Artificial intelligence techniques designed to make model behavior, decisions, and outputs understandable and interpretable to humans, enabling transparency, trust, and accountability.
- XGBoost: An optimized gradient boosting framework that builds ensembles of decision trees to produce highly accurate and efficient predictive models, widely used for structured and tabular data.
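Supervised training, as defined above, pairs each input with a known correct output. A minimal sketch of the idea, using only the Python standard library: a single artificial neuron (a logistic unit with two weights and a bias) learns the logical AND function from labeled examples via gradient descent. The function and variable names here are illustrative, not drawn from any particular library.

```python
import math
import random

def sigmoid(z):
    """Squash a raw score into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, lr=0.5):
    """Supervised training: adjust two weights and a bias by gradient
    descent so the neuron's output matches each labeled target."""
    random.seed(0)
    w1, w2, b = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = sigmoid(w1 * x1 + w2 * x2 + b)
            err = pred - target  # cross-entropy gradient w.r.t. the pre-activation
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Labeled data: each input is paired with its known correct output (AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)
preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # the neuron reproduces the AND labels: [0, 0, 0, 1]
```

The "labeled data" is exactly what distinguishes this from the unsupervised case: the error signal is computed against a known target for every example.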
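Unsupervised training, by contrast, must find structure without target labels. One classic example is k-means clustering, sketched here in pure Python for one-dimensional data and two clusters; `kmeans_1d` and its starting centers are illustrative choices, not a reference implementation.

```python
def kmeans_1d(points, c1, c2, iters=10):
    """Unsupervised training: alternately assign each unlabeled point to
    its nearest center, then move each center to its group's mean."""
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return c1, c2

# Unlabeled data: no "correct answers", just raw measurements that
# happen to fall into two groups.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c1, c2 = kmeans_1d(points, 0.0, 10.0)
print(round(c1, 1), round(c2, 1))  # centers converge to 1.0 and 9.1
```

The algorithm discovers the two groups on its own: nothing in the input says which cluster any point belongs to, which is the defining property of unsupervised training.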