Artificial intelligence (AI) is a field of computer science that builds systems to perform tasks that usually require human intelligence. These tasks include learning from data, understanding language, recognising patterns, and making decisions in real-world environments.
Knowing AI fundamentals is becoming a necessity for everyone. AI is no longer experimental; it is already implemented and working around us. So what are the parameters, the technology behind it, and how does it actually work?
Moreover, what are its limits and ethics, and what exactly is this technology? If you only want a high‑level view of where AI fits into today’s tech landscape, start with how AI powers modern software and platforms. Too many questions with overly technical answers lead nowhere.
That is why a guide to AI fundamentals helps. Since many people find the topic too technical, this article takes a less technical approach. Read it gradually and you will build up a solid understanding of AI!
What Is Artificial Intelligence?
Artificial intelligence (AI) is a way of building intelligent systems that can perform human tasks. These tasks include learning from data, understanding language, recognising patterns, and supporting decision-making in real situations.
Most people already use AI every day in search engines, maps, spam filters, and product recommendations.
Where the Term “Artificial Intelligence” Comes From
The term artificial intelligence was first used in 1956 by computer scientist John McCarthy. He described it as the science of making machines act in ways that would seem intelligent if humans did them. The idea was not to create a thinking mind, but to create systems that could solve problems efficiently.
Why AI Exists (Speed, Scale, Accuracy)
AI exists because humans cannot process massive amounts of information fast enough. Machines can work at scale, reduce errors, and improve accuracy when making decisions.
Modern AI systems now support billions of decisions every day across the internet, from ranking search results to detecting fraud and optimising delivery routes.
Sci-Fi AI vs Real AI (What AI Is Not)
Movies show AI as conscious, emotional, or self-aware. Real AI does not think or feel. It follows logic, math, and rules learned from data.
AI mimics human intelligence in narrow tasks, but it does not have awareness, goals, or understanding of meaning.
Why AI Mimics Logic, Not Consciousness
AI learns patterns from examples. It does not understand them. When an AI system writes text or recognises images, it predicts what comes next based on data. This is why AI is powerful for automation, but limited in reasoning.
Human Intelligence vs AI Capabilities (Quick Comparison)
| Human Intelligence | AI Capabilities |
| --- | --- |
| Understands meaning | Detects patterns |
| Learns from life experience | Learns from data |
| Uses common sense | Uses probabilities |
| Adapts broadly | Performs narrow tasks |
| Has awareness | Has no consciousness |
A Brief History of AI
Artificial intelligence did not appear suddenly. It grew slowly, failed many times, and only recently became useful at scale. The following timeline will help you understand the AI journey:
Alan Turing’s Idea: Can Machines Think?
In 1950, mathematician Alan Turing asked a simple question: Can a machine imitate human thinking?
He proposed the Turing Test to measure whether a machine could behave like a human in conversation. This idea became the foundation of modern AI research.
Early AI and the First Failures (1950s-2000s)
Early AI systems relied on rule-based systems. Engineers manually wrote thousands of rules to guide machines. These systems worked in controlled labs but failed in real environments.
By the 1970s, and again in the 1990s, progress slowed and funding dropped. These periods became known as AI winters because results could not match expectations.
The 2012 Breakthrough That Changed Everything
AI finally started working when three forces aligned:
- Massive data from the internet
- Powerful GPUs for training
- Better models like neural networks
In 2012, at the ImageNet competition, a deep learning model cut image recognition errors roughly in half. This proved that machines could learn from data instead of hand-written rules.
Why AI Only Works at Scale Today
Modern AI depends on the data explosion of the last decade. Without billions of examples, models cannot learn with accuracy.
That is why AI now powers search, translation, recommendations, and automation. The ideas existed long ago, but only today do we have the data, computing power, and models to make them work in the real world.
Core Building Blocks of AI Systems (Foundation Layer)
Every AI system is built from a few simple parts. When these parts work together, machines can learn, decide, and act in the real world. Most confusion online comes from mixing these layers, so this section separates them clearly.
Algorithms (How Machines Decide)
An algorithm is a step-by-step method a machine follows to reach a result. It is the logic behind every decision an AI makes. There are two main types:
- Rule-based algorithms follow fixed instructions written by humans
- Learning AI algorithms adjust themselves by finding patterns in data
Modern AI relies on learning algorithms because real-world problems change too often for fixed rules to work.
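As a toy contrast between the two styles (the trigger words, example messages, and scoring rule below are all invented for illustration), a rule-based filter uses fixed human-written rules, while a "learning" filter derives its own rule from examples:

```python
def rule_based_spam(text: str) -> bool:
    """Rule-based: flag messages containing human-chosen trigger words."""
    triggers = {"winner", "free", "prize"}
    return any(word in triggers for word in text.lower().split())

def train_learning_spam(examples):
    """Learning: count which words appear more in spam than in ham."""
    spam_counts, ham_counts = {}, {}
    for text, is_spam in examples:
        counts = spam_counts if is_spam else ham_counts
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def classify(text: str) -> bool:
        # Positive score = words seen more often in spam than in ham.
        score = sum(
            spam_counts.get(w, 0) - ham_counts.get(w, 0)
            for w in text.lower().split()
        )
        return score > 0

    return classify

examples = [("free prize now", True), ("meeting at noon", False)]
classify = train_learning_spam(examples)
print(rule_based_spam("claim your free prize"))  # True
print(classify("free prize"))                    # True
```

The rule-based version never changes unless a human edits the trigger list; the learning version changes automatically whenever it is retrained on new examples.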
Data (What AI Learns From)
Data is the raw material of AI. Without it, learning cannot happen. There are two types:
- Structured data: numbers, tables, labels (easy to process)
- Unstructured data: text, images, audio, video (harder but richer)
Data quality limits intelligence. If data is biased, incomplete, or noisy, the AI model will learn the wrong patterns, no matter how advanced the algorithm is.
AI Models (How Learning Is Stored)
An AI model is the result of training an algorithm on data. It stores learned patterns, not rules. Think of a model as a compressed memory of experience:
- The more data it sees, the better patterns it learns
- The more it is refined, the more accurate it becomes
That is why AI models improve over time without being rewritten from scratch.
AI Systems (How AI Models Work in Reality)
A model alone cannot do anything. A real AI system combines:
- An AI model (brain)
- Data (input)
- Hardware (compute)
- Interface (how humans or machines use it)
Example: Self-driving car
- Sensors collect data (cameras, radar)
- Models interpret the road
- Hardware processes decisions
- The interface controls steering and braking
Only when all parts work together does AI move from theory to real-world action.
AI Models vs Machine Learning Models (What’s the Difference?)
Many people use the terms AI model and machine learning model interchangeably. They are not the same. Understanding the difference helps you judge what an AI system can and cannot do in reality.
AI Model (The Broad Category)
An AI model is any model designed to perform a specific task repeatedly. This includes decision-making, rule-following, and pattern recognition.
Not all AI models learn. Some follow fixed rules written by humans.
For Example:
A rule-based spam filter that blocks emails using predefined rules is an AI model, but not a learning one.
Machine Learning (ML) Model (The Learning Subset)
A machine learning model is a type of AI model that learns patterns from data instead of rules. It improves when:
- It sees more examples
- Data quality increases
- Feedback is added
This is called pattern learning and is the core reason modern AI scales.
Example:
A model that learns to detect spam by analysing millions of emails is a machine learning model.
Why This Difference Matters
All ML models are AI models, but not all AI models are ML models. That is why some AI systems feel “smart”, and others feel rigid. Learning changes everything.
AI Model vs ML Model vs AI System (Simple Comparison)
| Feature | ML Model | AI Model | AI System |
| --- | --- | --- | --- |
| Learns from data | Yes | Sometimes | Yes (via its model) |
| Uses pattern learning | Yes | Maybe | Yes |
| Can work without learning | No | Yes | No |
| Includes hardware & interface | No | No | Yes |
| Used directly by users | No | Rarely | Always |
| Example | Image classifier | Rule-based chatbot | Self-driving car system |
The Big Picture (Advanced Understanding)
- ML models learn patterns
- AI models define intelligence logic
- AI systems deliver real-world outcomes
When you use ChatGPT, a recommendation engine, or a self-driving car system, you are not using a model alone. You are using a complete AI system built on machine learning.
Machine Learning Explained (How AI Learns)
Machine learning is a part of artificial intelligence that allows systems to learn from data instead of following fixed rules.
Rather than being programmed step by step, the system studies examples, finds patterns, and improves its decisions over time. This is why modern AI can adapt while older software could not.
At its core, machine learning works in three main ways, depending on how the data is given and how feedback is used.
- Supervised Learning
Supervised learning teaches a model using labelled examples. Each input already has a correct answer, so the system learns by comparing its predictions with the right outcome and adjusting.
Example:
If you show the model thousands of emails marked as “spam” or “not spam,” it learns the difference.
Best used when:
- You know the correct output
- Accuracy matters
- Past data is reliable
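A minimal sketch of supervised learning is a 1-nearest-neighbour classifier: every training example carries a correct label, and a new input gets the label of the closest known example. The numeric features and labels below are invented for illustration:

```python
def predict(train, x):
    """Return the label of the labelled training example closest to x."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Each input already has a correct answer (the label).
train = [(1.0, "spam"), (1.2, "spam"), (5.0, "ham"), (5.5, "ham")]

print(predict(train, 1.1))  # spam — close to the labelled spam examples
print(predict(train, 6.0))  # ham
```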
- Unsupervised Learning
Unsupervised learning works with unlabeled data. The system looks for patterns on its own without being told what is right or wrong.
Example:
An online store groups customers by buying behavior without any manual labels.
Best used when:
- Data is large and messy
- Patterns are unknown
- Human labelling is too slow or costly
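A minimal sketch of unsupervised learning is k-means clustering. Nothing in the data below is labelled (the numbers stand in for, say, order values from two unknown customer groups, and are invented); the algorithm groups points purely by distance:

```python
def kmeans_1d(points, iters=10):
    """2-cluster k-means on 1-D data: group points, then re-centre, repeat."""
    a, b = min(points), max(points)  # crude initial centroids
    for _ in range(iters):
        cluster_a = [p for p in points if abs(p - a) <= abs(p - b)]
        cluster_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(cluster_a) / len(cluster_a)
        b = sum(cluster_b) / len(cluster_b)
    return sorted([a, b])

centroids = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.7])
print(centroids)  # two group centres, around 1.0 and 9.07
```

No human told the algorithm there were two groups of customers; it discovered the structure from the distances alone.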
- Reinforcement Learning (Driving Example)
Reinforcement learning teaches AI through trial and error. The system takes actions, receives rewards or penalties, and slowly learns what works best.
For example, self-driving cars use this method:
- Safe driving = reward
- Sudden braking or drifting = penalty
- Over time, the system learns smoother driving
Best used when:
- The environment changes
- Decisions affect future outcomes
- Continuous improvement is needed
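The reward-and-penalty loop can be sketched with tabular Q-learning on a tiny invented world: a 4-cell corridor where the agent starts at cell 0 and is rewarded only for reaching cell 3. All states, rewards, and hyper-parameters here are made up for illustration:

```python
import random

random.seed(0)
n_states, actions = 4, (-1, +1)  # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):  # training episodes: trial and error
    s = 0
    while s != 3:
        # Mostly act greedily, sometimes explore at random.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 3 else 0.0  # reward only at the goal
        best_next = max(Q[(s2, a2)] for a2 in actions)
        # Update the action's value toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, moving right from the start looks better than moving left.
print(Q[(0, +1)] > Q[(0, -1)])  # True
```

No one programmed "go right"; the preference emerged from accumulated rewards, which is the essence of reinforcement learning.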
Machine Learning Types Compared
| Learning Type | Input Data | Output | Use Case |
| Supervised Learning | Labelled Data | Predictions | Spam filters, medical diagnosis |
| Unsupervised Learning | Unlabelled Data | Groups or Patterns | Customer segmentation, fraud discovery |
| Reinforcement Learning | Environment Feedback | Actions | Self-driving cars, Game AI |
Why This Matters
Most modern AI systems use a mix of these learning methods. Understanding how they work helps you judge AI capabilities, limitations, and why some systems improve faster. This is the foundation of search engines, recommendations, voice assistants, and automation tools you use daily.
Neural Networks (Why AI Thinks Differently Than Code)
A neural network is a type of AI model inspired by how the human brain processes information.
Instead of following fixed rules, it learns by adjusting internal connections based on results. Neural networks can recognise patterns that traditional code cannot. A neural network is built from layers of connected units that pass signals forward, one step at a time.
Brain Inspiration (But Not a Real Brain)
Neural networks take inspiration from neurons, but they do not think or understand. Each unit receives a signal, multiplies it by a weight, and passes it forward if it matters.
- Layers control how deep the learning goes
- Weights decide what information is important
- Signals carry information through the system
Learning happens when weights adjust after mistakes. Over time, the model gets better at seeing patterns.
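The "weights adjust after mistakes" idea can be sketched with a single unit (a perceptron) learning the logical AND of two inputs. The learning rate and epoch count are arbitrary choices for this toy example:

```python
def train_perceptron(data, epochs=10, lr=1):
    """Train one unit: weighted signal in, adjust weights on each mistake."""
    w = [0, 0]
    bias = 0
    for _ in range(epochs):
        for (x1, x2), target in data:
            signal = w[0] * x1 + w[1] * x2 + bias  # weighted signal
            output = 1 if signal > 0 else 0
            error = target - output                # was this a mistake?
            w[0] += lr * error * x1                # adjust the weights
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

def predict(w, bias, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(predict(w, b, 1, 1))  # 1 — the unit has learned AND
print(predict(w, b, 1, 0))  # 0
```

Real networks stack millions of such units in layers, but the core loop is the same: signal forward, compare with the target, adjust the weights.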
Why Deep Learning Changed Everything After 2012
Before 2012, neural networks existed but failed at large tasks because computers were too slow and data was limited. In 2012, three things changed at once:
- Large datasets became available from the internet
- GPUs made training fast enough to handle millions of calculations
- Deeper networks learned complex patterns across many layers
This change allowed AI to:
- Recognise images better than humans in some tasks
- Understand speech with accuracy
- Translate languages at scale
- Power modern tools like search, assistants, and generative AI
This moment marked the rise of deep learning, which still drives today’s most capable AI systems.
Why Neural Networks Think Differently Than Code
Traditional code follows instructions written by humans. Neural networks learn the instructions themselves from data. That difference explains why:
- AI can spot patterns humans miss
- Results improve with more data
- Mistakes happen when the data is weak or biased
Neural networks are robust, but they only know what they have learned. They do not reason, feel, or understand meaning. They only recognise patterns at scale.
8 Things AI Can Do Today (Task-Based Understanding)
Modern AI does not “think” like humans. It performs specific tasks by learning patterns from data and applying them at scale. Almost every real-world AI system fits into one of the eight tasks below.

1. Prediction
AI uses past data to forecast what will happen next. Examples include weather forecasts, demand planning, stock movement signals, and traffic estimation.
Prediction works when patterns repeat and enough historical data exists.
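A minimal sketch of the idea (the demand numbers and window size below are invented) is a moving-average forecast, which predicts the next value from recent history:

```python
def forecast_next(history, window=3):
    """Predict the next value as the average of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

daily_demand = [100, 104, 98, 102, 106]
print(forecast_next(daily_demand))  # (98 + 102 + 106) / 3 = 102.0
```

Real forecasting models are far more sophisticated, but they rest on the same premise: the recent past carries information about the near future.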
2. Classification
AI sorts data into defined categories. Email spam filters, medical scans, fraud detection, and content moderation all rely on classification.
The system learns from labelled examples and applies the same logic to new data. These classifications are the core of AI in marketing automation systems.
3. Natural Language Processing (NLP)
AI processes and understands human language in text form. NLP powers chatbots, translation tools, search engines, and document analysis.
It allows machines to read, write, and respond in human language.
4. Computer Vision
AI interprets images and videos. It detects faces, reads X-rays, recognises objects, and monitors environments.
Computer vision helps machines “see” patterns humans might miss.
5. Speech Recognition
AI converts spoken words into text or actions. This task enables voice assistants, call transcription, and accessibility tools.
Accuracy depends on clear audio and diverse training data.
6. Anomaly Detection
AI finds patterns that do not match normal behavior. Banks use it to spot fraud, factories use it to detect equipment failure, and networks use it to find attacks.
This task works because AI learns what “normal” looks like first.
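That "learn normal first" step can be sketched with a simple statistical rule: learn the mean and spread of typical values, then flag anything far outside that range. The transaction amounts and the 3-sigma threshold below are invented for illustration:

```python
import statistics

def fit_normal(values):
    """Learn what 'normal' looks like: the mean and spread of past values."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomaly(x, mean, std, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(x - mean) > threshold * std

normal_spend = [20, 25, 22, 30, 18, 27, 24, 21]  # typical transactions
mean, std = fit_normal(normal_spend)

print(is_anomaly(23, mean, std))   # False — looks normal
print(is_anomaly(900, mean, std))  # True — flagged as fraud-like
```

Production fraud systems model far richer behaviour than one number, but the principle is identical: no labelled "fraud" examples are needed, only a model of normality.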
7. Clustering
AI groups similar data without labels. It helps discover hidden patterns in customers, documents, images, and research data.
Clustering is useful when humans do not know the categories in advance.
8. Generation
AI creates new content from learned patterns. This includes text, images, code, audio, and video.
Generative AI does not copy. It produces new output based on probability and structure.
Why These 8 Tasks Matter
Every AI tool you use today is built on one or more of these tasks. Understanding them removes confusion and makes AI easier to evaluate, trust, and use correctly.
What Is Generative AI? (The New AI Era)
Generative AI is a type of artificial intelligence that creates new content instead of just analyzing existing data. It can write text, generate images, produce videos, build code, and create human-like voice outputs by learning patterns from large datasets.
Due to this change, AI moved from automation to content creation at scale. Traditional AI systems focus on decisions. Generative AI focuses on creation. It does not think or imagine. It predicts what comes next, one step at a time, based on patterns learned from data.
Traditional AI vs Generative AI (Quick Comparison)
| Feature | Traditional AI | Generative AI |
| --- | --- | --- |
| Main purpose | Analyse and decide | Create new content |
| Typical output | Yes/no, label, score | Text, image, video, code, voice |
| Learning method | Rules or trained models | Large language models (LLMs), generative models |
| Common use | Fraud detection, search, recommendations | Writing, design, coding, media |
| Example | Spam filter | ChatGPT, image generator, code assistant |
What Generative AI Can Create Today
Generative AI is already used for:
- Text generation (articles, emails, summaries)
- Image creation (designs, illustrations, photos)
- Video generation (short clips, scenes, ads)
- Code generation (functions, scripts, debugging)
- Voice synthesis (narration, assistants, dubbing)
This is why large language models became the centre of modern AI. They allow one system to handle many tasks, instead of building one model per problem.
How Generative AI Works
Generative AI works through a simple but powerful flow:
input → processing → prediction → output.
Behind the scenes, it uses math, probability, and massive training data. There is no creativity or awareness.
Input → Prompt → Inference → Output
Input
You give the system text, an image, or a command. The command is called the prompt.
Prompt
The prompt tells the model what to do. Clear prompts lead to better results.
Inference
The model predicts the next best token based on patterns it learned during training.
Output
The system returns text, images, code, or audio as a result of those predictions.
Everything happens in milliseconds, far too fast for a human to follow step by step.
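The input → prompt → inference → output flow can be sketched with a toy bigram "model". The training sentence below is invented, and real models predict over vast vocabularies with neural networks rather than counts, but the loop — predict the most likely next token, append it, repeat — is the same:

```python
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran"
tokens = training_text.split()

# "Training": count which token follows which.
next_counts = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    next_counts[a][b] += 1

def generate(prompt: str, steps: int = 3) -> str:
    """Inference: repeatedly predict the most likely next token."""
    out = prompt.split()
    for _ in range(steps):
        current = out[-1]
        if current not in next_counts:
            break  # nothing learned after this token
        out.append(next_counts[current].most_common(1)[0][0])
    return " ".join(out)

print(generate("the cat"))  # "the cat sat on the"
```

Notice that the model "writes" only what its training counts make most probable; there is no plan, intention, or understanding behind the output.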
Tokens, Vectors, and Embeddings
Generative AI does not read words like humans do. A token is a small piece of text (word or part of a word). Tokens are converted into numbers called vectors. Vectors are stored as embeddings, which represent meaning and relationships.
The overall process allows the model to understand that:
“cat” and “dog” are closer than “cat” and “car.”
Meaning comes from math, not language rules.
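As a toy sketch of that idea: the three-number "embeddings" below are hand-made stand-ins (real embeddings are learned and have hundreds of dimensions), but cosine similarity over them shows how closeness of meaning becomes pure arithmetic:

```python
import math

# Invented vectors, chosen so "cat" and "dog" point in similar directions.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat_dog = cosine(embeddings["cat"], embeddings["dog"])
cat_car = cosine(embeddings["cat"], embeddings["car"])
print(cat_dog > cat_car)  # True — "cat" is closer to "dog" than to "car"
```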
Why Context Matters
A context window is how much information a model can remember at once.
- Small context = short memory
- Large context = better understanding of long conversations or documents
If content falls outside the context window, the model can no longer see it. This is why long documents must be summarised or chunked for accurate results.
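Chunking can be sketched in a few lines: split a long document into overlapping pieces so each piece fits inside the window. The window size and overlap below are invented, and real limits are measured in tokens rather than words:

```python
def chunk(words, window=50, overlap=10):
    """Split a word list into overlapping pieces of at most `window` words."""
    step = window - overlap  # overlap preserves context across boundaries
    return [words[i:i + window] for i in range(0, len(words), step)]

doc = ["word"] * 120                       # a "long document" of 120 words
pieces = chunk(doc)
print(len(pieces))                         # 3 chunks
print(all(len(p) <= 50 for p in pieces))   # True — each fits the window
```

The overlap between consecutive chunks is a common trick so that a sentence cut at a chunk boundary still appears whole in at least one piece.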
Why This Architecture Changed Everything
Earlier AI models solved one task at a time. Generative AI models use the same architecture to handle writing, coding, vision, and reasoning, all with one system. That flexibility is what defines the new AI era.
Inside Modern AI Models (LLMs, Vision Models, Agents)
Modern AI models are advanced systems that learn patterns from massive data and use them to understand, generate, or act across text, images, video, and real-world tasks.
Moreover, modern artificial intelligence no longer works in isolated boxes. Today’s models handle language, vision, and reasoning together, which makes them useful across many domains.
Why transformers changed everything
Before 2017, models struggled to understand long contexts. Transformers changed that by letting models focus on relationships between words, images, or signals at the same time. AI became faster, more accurate, and able to scale. Large language models (LLMs) like GPT are built on this idea.
How models work across domains
Modern models are often multimodal. They can read text, see images, hear audio, and combine signals to respond. A single model can write, analyse images, explain charts, or generate code without switching systems.
Agent-based AI (current trend)
By 2026, many systems use AI agents. These models can plan steps, call tools, check results, and repeat actions until a task is complete. With this shift, AI can now book travel, analyse data, or run workflows on its own.
What Powers AI Systems? (Hardware, Compute, Cost Reality)
AI systems run on specialised hardware and massive computing power, which limits who can train them and explains why AI is expensive to build.
Behind every AI model sits a large physical infrastructure that most users never see.
Why GPUs matter
AI training requires billions of math operations. GPUs can run these calculations in parallel, which makes them essential for training and running large models. Standard CPUs are too slow for this work.
Why training is expensive
Training a modern model takes:
- thousands of GPUs
- weeks or months of runtime
- large amounts of electricity
These are reasons why training costs can reach millions of dollars for advanced systems.
Why only a few companies train models
Because of hardware, energy, and data needs, only a small number of organisations can train foundation models. Most companies now build on top of existing models instead of training from scratch.
How AI Models Are Trained (Lifecycle)
AI models are trained through pre-training, fine-tuning, deployment, and refinement to improve performance over time.
Moreover, training is a step-by-step process. The system learns patterns from data, gets adjusted for specific tasks, is released into real use, and then improves through feedback.
This lifecycle explains why modern AI models improve over time instead of staying static like traditional software.
1. Pre-training (Building the Foundation Model)
Pre-training is where learning begins. Engineers train a model on massive datasets such as text, images, code, or audio. The model does not learn facts. It learns patterns, relationships, and structure in data.
Combined, this creates a foundation model: a general-purpose AI that can perform many tasks but is not yet specialised.
Example:
A language model learns grammar, sentence structure, and meaning by reading large volumes of text.
2. Fine-tuning (Teaching Specific Behavior)
Fine-tuning adjusts a foundation model for a specific job. Engineers use smaller, high-quality datasets to guide the model toward:
- safer outputs
- better accuracy
- task-specific skills
Human feedback often plays a role here, especially in language models. This step explains why two AI models can behave differently even if they share the same base architecture.
3. Deployment (Real-World Use)
Deployment is when the model goes live inside an AI system. At this stage, the model connects with:
- apps
- APIs
- user interfaces
- automation workflows
That is where performance matters. Speed, cost, and reliability determine whether an AI model works in real products or stays in research labs.
4. Refinement (Learning From Feedback)
Refinement happens after real people use the model. Developers monitor:
- errors
- bias
- failures
- performance gaps
They update the model with new data or feedback loops to improve results. That is why modern AI systems evolve. They don’t stay frozen after launch.
Reality-Based AI Applications (Where You Already Use AI)
AI already operates behind search, navigation, payments, healthcare, and recommendations without visible user interaction.
You interact with AI systems dozens of times a day, even if you never open an AI app.
Invisible AI Examples You Use Daily
Search Engines
AI ranks results, understands intent, and predicts what you mean. Search engines and AI-powered result summaries use AI to interpret intent and rank information.
Maps & Navigation
AI predicts traffic, finds faster routes, and adjusts directions in real time. Navigation platforms use AI to report traffic conditions on the road.
Recommendation Systems
Streaming platforms, online stores, and social feeds use AI to predict what you will click, watch, or buy next.
Payments & Security
Banks use AI to detect fraud by spotting unusual patterns in transactions within milliseconds.
Marketing & Automation
AI decides which emails get sent, which ads appear, and which users see which offers, often automatically. Behind the scenes, AI also supports project planning, risk tracking, and resource management for teams.
Healthcare AI
AI helps doctors analyse medical images, detect risks, and prioritise urgent cases faster.
Finance AI
AI models assess credit risk, flag suspicious activity, and assist in algorithmic trading.
Why This Matters
AI is not a future tool. It is infrastructure. It works quietly, at scale, across millions of decisions per second. It handles the tasks humans cannot perform fast enough or consistently enough.
Limitations of AI (What AI Still Cannot Do)
AI systems optimize probability, not truth. Although AI can recognize patterns, this creates limits in reasoning, accuracy, and accountability.
Modern AI systems are powerful, but they are not autonomous thinkers. AI limitations fall into two categories: model limitations and deployment risks.
Model limitations come from how AI systems learn probabilities rather than understanding facts or causality.
Deployment risks arise when AI systems are used without human oversight, safeguards, or accountability.
1. AI Can Produce Confident but Incorrect Outputs (Hallucinations)
AI models generate responses by predicting the most likely next token based on training data. They do not verify facts or check reality by default.
When a model lacks sufficient data or context, it may fill gaps with plausible-sounding information. The phenomenon is known as a hallucination.
Why does this happen technically?
The model optimizes for linguistic probability, not factual accuracy.

Why it matters:
In high-stakes domains like healthcare, law, and finance, an incorrect answer that sounds confident is more dangerous than no answer at all.
2. AI Reflects Bias Present in Training Data
AI systems learn from historical and human-generated data. If that data contains bias, imbalance, or systemic inequality, the model will reproduce those patterns. This has been observed in:
- Resume screening systems
- Facial recognition accuracy across demographics
- Sentiment and language interpretation
Why does this happen?
AI does not understand fairness or ethics. It learns correlations, not values.

Why it matters:
Without human review and correction, AI can reinforce existing social and economic bias at scale.
3. AI Has Fundamental Reasoning Limits
AI can follow learned logic patterns, but it does not reason in the human sense. It cannot form independent understanding or causal awareness. AI struggles with:
- Abstract reasoning beyond training data
- Novel situations that require common sense
- Explaining why a conclusion is correct
Why does this happen?
Models do not build mental models of the world. They map inputs to outputs based on probability.
4. Context Windows Limit AI
AI systems process information within a fixed context window. Anything outside that window is not visible to the model during inference. Even with larger context sizes in 2026, AI can still:
- Lose track of earlier instructions
- Miss long-range dependencies
- Misinterpret extended documents
Why this matters:
Long-term memory, continuity, and planning remain human strengths.
5. AI Is Dependent on Compute, Energy, and Cost
Advanced AI systems require massive computational resources. Training and running large models depend on:
- Specialized GPUs
- Large-scale data centers
- High energy consumption
Because of this, only a small number of organizations can train frontier models.
Why this matters:
AI progress is constrained by infrastructure, cost, and environmental trade-offs.
Why Human Oversight Is Non-Negotiable
AI does not possess judgment, accountability, or moral reasoning. Humans must remain responsible for:
- Verifying outputs
- Setting boundaries
- Auditing decisions
- Deciding when AI should not be used
AI works best as an assistive system, not a decision-maker.
Key Takeaway
AI is bounded. Understanding these boundaries is essential for using artificial intelligence safely, accurately, and responsibly in real-world systems.
Ethics, Safety, and Responsible AI (2026 Reality)
Responsible AI focuses on alignment, explainability, and human oversight in systems that affect real-world decisions.
AI systems now influence hiring, healthcare, finance, and security. Wrong decisions can affect people’s lives, which makes ethics and safety essential concerns.
Why alignment matters
Alignment ensures an AI system follows human intent. Without alignment, a system can produce outputs that are technically correct but socially harmful, misleading, or unsafe.
In practice, alignment means:
- Clear objectives defined by humans
- Guardrails around sensitive decisions
- Human review where risk is high
That is why regulated industries require human-in-the-loop systems, not fully autonomous AI.
The black box problem
Many modern AI models, especially complex neural networks, cannot fully explain why they reach a decision. This is known as the black box problem.
For example:
- A medical AI may flag a patient as high risk
- A credit model may reject a loan
If the reasoning cannot be explained, the decision cannot be trusted or audited.
Why explainable AI matters
Explainable AI (XAI) focuses on making AI decisions understandable to humans. This is critical for:
- Debugging errors
- Detecting bias
- Meeting legal and compliance requirements
In 2026, explainability is about accountability, safety, and trust.
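As a toy sketch of why simple models are explainable and deep networks are not: in a linear scoring model, the score is a weighted sum, so each feature's contribution to the decision can be read off directly. The credit-style features and weights below are invented for illustration:

```python
# Invented weights for a toy credit-style linear scoring model.
weights = {"income": 0.5, "debt": -0.8, "late_payments": -1.5}
applicant = {"income": 4.0, "debt": 2.0, "late_payments": 1.0}

# Each feature's contribution is visible: weight * value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(round(score, 2))                            # -1.1 — negative decision
print(min(contributions, key=contributions.get))  # "debt" — biggest negative driver
```

A rejected applicant can be told exactly which factor drove the decision. A deep neural network spreads that same decision across millions of weights, which is why explainable AI is an active research and compliance topic rather than a built-in property.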
AI Fundamentals vs Advanced AI (What to Learn Next)
AI fundamentals explain how systems work, while advanced AI focuses on building, governing, and optimizing them.
Not everyone needs to become an AI engineer. Learning paths should match goals, roles, and risk levels.
Learning progression explained
AI education works best when it moves from understanding → application → specialization. The following comparison table will help you understand the AI learning path:
| Level | Focus | What You Learn |
| --- | --- | --- |
| Beginner AI | Understanding | What AI is, how models learn, common tasks, and limitations |
| Intermediate AI | Application | Using AI tools, prompts, workflows, automation, and evaluation |
| Advanced AI | Systems & control | Model training, optimization, ethics, safety, deployment |
How to choose your level
- Learn AI fundamentals if you use AI tools or work with AI outputs
- Move to intermediate AI if you automate tasks or workflows
- Study advanced AI only if you build, deploy, or regulate AI systems
Following this path prevents you from over-learning theory while missing practical competence.
Final Summary: Daily AI Tools Verdict
Here is what experts at the Daily AI tool found about AI fundamentals:
- What AI is
Artificial intelligence is a field of computer science that builds systems able to learn patterns, make decisions, and perform tasks that usually require human intelligence.
- How AI works
AI systems learn from data using algorithms and models. They detect patterns, adjust internal parameters, and generate outputs based on probability, not understanding or intent.
- What AI can do
AI can predict outcomes, classify information, understand language, recognize images, detect anomalies, and generate text, images, code, and recommendations at scale.
- What AI can’t do
AI cannot think, reason causally, understand truth, or take responsibility. It can produce confident errors, reflect bias, and fail without human oversight.
How should you use this knowledge? Use AI as a support tool, not a decision-maker. Verify outputs, understand limits, and apply AI where speed and consistency matter.