
Foundation of AI and ML (4351601) - Winter 2023 Solution


Question 1(a) [3 marks]

Define the following terms: (1) Artificial Intelligence (2) Expert System.

Answer:

| Term | Definition |
|------|------------|
| Artificial Intelligence | AI is a branch of computer science that creates machines capable of performing tasks that typically require human intelligence, such as learning, reasoning, and problem-solving. |
| Expert System | An expert system is a computer program that uses knowledge and inference rules to solve problems that normally require human expertise in a specific domain. |
  • AI characteristics: Learning, reasoning, perception
  • Expert system components: Knowledge base, inference engine

Mnemonic: “AI Learns, Expert Advises”

Question 1(b) [4 marks]

Compare Biological Neural Network and Artificial Neural Network.

Answer:

| Aspect | Biological Neural Network | Artificial Neural Network |
|--------|---------------------------|---------------------------|
| Processing | Parallel processing | Sequential/parallel processing |
| Speed | Slow (milliseconds) | Fast (nanoseconds) |
| Learning | Continuous learning | Batch/online learning |
| Storage | Distributed storage | Centralized storage |
  • Biological: Complex, fault-tolerant, self-repairing
  • Artificial: Simple, precise, programmable

Mnemonic: “Bio is Complex, AI is Simple”

Question 1(c) [7 marks]

Explain types of AI with its applications.

Answer:

| Type of AI | Description | Applications |
|------------|-------------|--------------|
| Narrow AI | AI designed for specific tasks | Voice assistants, recommendation systems |
| General AI | AI with human-level intelligence | Not yet achieved |
| Super AI | AI exceeding human intelligence | Theoretical concept |
graph TD
    A[Types of AI] --> B[Narrow AI]
    A --> C[General AI]
    A --> D[Super AI]
    B --> E[Siri, Alexa]
    B --> F[Netflix Recommendations]
    C --> G[Human-level Tasks]
    D --> H[Beyond Human Intelligence]
  • Current focus: Narrow AI dominates today’s applications
  • Future goal: Achieving General AI safely

Mnemonic: “Narrow Now, General Goal, Super Scary”

Question 1(c) OR [7 marks]

Explain AI ethics and limitations.

Answer:

| Ethics Aspect | Description |
|---------------|-------------|
| Privacy | Protecting personal data and user information |
| Bias | Ensuring fairness across different groups |
| Transparency | Making AI decisions explainable |
| Accountability | Determining responsibility for AI actions |

Limitations:

  • Data dependency: Requires large, quality datasets
  • Computational power: Needs significant processing resources
  • Lack of creativity: Cannot truly create original concepts

Mnemonic: “Privacy, Bias, Transparency, Accountability”

Question 2(a) [3 marks]

Define the following terms: (1) Well posed Learning Problem (2) Machine Learning.

Answer:

| Term | Definition |
|------|------------|
| Well posed Learning Problem | A learning problem with clearly defined task (T), performance measure (P), and experience (E) where performance improves with experience. |
| Machine Learning | A subset of AI that enables computers to learn and improve automatically from experience without being explicitly programmed. |
  • Well posed formula: T + P + E = Learning
  • ML advantage: Automatic improvement from data

Mnemonic: “Task, Performance, Experience”

Question 2(b) [4 marks]

Explain Reinforcement Learning along with terms used in it.

Answer:

| Term | Description |
|------|-------------|
| Agent | The learner or decision maker |
| Environment | The world in which the agent operates |
| Action | What the agent can do in each state |
| State | Current situation of the agent |
| Reward | Feedback from the environment |
graph LR
    A[Agent] --> B[Action]
    B --> C[Environment]
    C --> D[State]
    C --> E[Reward]
    D --> A
    E --> A
  • Learning process: Trial and error approach
  • Goal: Maximize cumulative reward

Mnemonic: “Agent Acts, Environment States and Rewards”
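
The loop can be illustrated with a small Python sketch; the `CorridorEnv` class and its `reset()`/`step()` interface are illustrative assumptions (loosely following the common Gym convention), not part of the question:

```python
import random

class CorridorEnv:
    """Toy environment: the agent starts at position 0 and must reach position 3."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):            # action: +1 (right) or -1 (left)
        self.state = max(0, self.state + action)
        done = self.state == 3
        reward = 1 if done else 0      # reward only when the goal is reached
        return self.state, reward, done

env = CorridorEnv()
state = env.reset()                    # agent observes the initial state
total_reward, done = 0, False
while not done:
    action = random.choice([-1, +1])   # agent chooses an action (trial and error)
    state, reward, done = env.step(action)
    total_reward += reward             # goal: maximize cumulative reward
print("Episode finished, total reward =", total_reward)
```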

Question 2(c) [7 marks]

Compare Supervised, Unsupervised and Reinforcement Learning.

Answer:

| Aspect | Supervised | Unsupervised | Reinforcement |
|--------|------------|--------------|---------------|
| Data | Labeled data | Unlabeled data | Interactive data |
| Goal | Predict output | Find patterns | Maximize reward |
| Feedback | Immediate | None | Delayed |
| Examples | Classification | Clustering | Game playing |
  • Supervised: Teacher-guided learning
  • Unsupervised: Self-discovery learning
  • Reinforcement: Trial-and-error learning

Mnemonic: “Supervised has Teacher, Unsupervised Discovers, Reinforcement Tries”

Question 2(a) OR [3 marks]

Write Key features of Reinforcement Learning.

Answer:

| Feature | Description |
|---------|-------------|
| Trial and Error | Learning through experimentation |
| Delayed Reward | Feedback comes after actions |
| Sequential Decision | Actions affect future states |
  • No supervisor: Agent learns independently
  • Exploration vs Exploitation: Balance between trying new actions and using known good actions

Mnemonic: “Try, Delay, Sequence”

Question 2(b) OR [4 marks]

Explain Types of Reinforcement learning.

Answer:

| Type | Description |
|------|-------------|
| Positive RL | Adding positive stimulus to increase behavior |
| Negative RL | Removing negative stimulus to increase behavior |

Based on Learning:

  • Model-based: Agent learns environment model
  • Model-free: Agent learns directly from experience

Mnemonic: “Positive Adds, Negative Removes”

Question 2(c) OR [7 marks]

Explain approaches to implement Reinforcement Learning.

Answer:

| Approach | Description | Example |
|----------|-------------|---------|
| Value-based | Learn value of states/actions | Q-Learning |
| Policy-based | Learn policy directly | Policy Gradient |
| Model-based | Learn environment model | Dynamic Programming |
graph TD
    A[RL Approaches] --> B[Value-based]
    A --> C[Policy-based]
    A --> D[Model-based]
    B --> E[Q-Learning]
    C --> F[Policy Gradient]
    D --> G[Dynamic Programming]
  • Value-based: Estimates value functions
  • Policy-based: Optimizes policy parameters
  • Model-based: Uses environment model

Mnemonic: “Value, Policy, Model”
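
As an illustration of the value-based approach, here is a minimal tabular Q-learning sketch on an assumed 4-state corridor task (all names and numbers are illustrative, not from the exam):

```python
import random

# Tabular Q-learning (value-based): states 0..3, actions 0 (left) / 1 (right);
# reaching state 3 ends the episode with reward 1.
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = [[0.0, 0.0] for _ in range(4)]           # Q[state][action]

def step(state, action):
    nxt = max(0, min(3, state + (1 if action == 1 else -1)))
    return nxt, (1 if nxt == 3 else 0), nxt == 3

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: explore sometimes, otherwise exploit current values
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = Q[state].index(max(Q[state]))
        nxt, reward, done = step(state, action)
        # value-based update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)   # learned values should favour moving right in states 0, 1 and 2
```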

Question 3(a) [3 marks]

Describe the activation functions ReLU and sigmoid.

Answer:

| Function | Formula | Range |
|----------|---------|-------|
| ReLU | f(x) = max(0, x) | [0, ∞) |
| Sigmoid | f(x) = 1/(1 + e^(-x)) | (0, 1) |
  • ReLU advantage: No vanishing gradient problem
  • Sigmoid advantage: Smooth gradient, probabilistic output

Mnemonic: “ReLU Rectifies, Sigmoid Squashes”
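
A small Python sketch of both functions (plain `math`, no libraries assumed):

```python
import math

def relu(x):
    """ReLU: outputs x for positive inputs, 0 otherwise; range [0, inf)."""
    return max(0.0, x)

def sigmoid(x):
    """Sigmoid: squashes any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, relu(x), round(sigmoid(x), 3))
# -2.0 -> relu 0.0, sigmoid ~0.119 ; 2.0 -> relu 2.0, sigmoid ~0.881
```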

Question 3(b) [4 marks]

Explain Multi-layer feed forward ANN.

Answer:

| Component | Description |
|-----------|-------------|
| Input Layer | Receives input data |
| Hidden Layers | Process information (multiple layers) |
| Output Layer | Produces final result |
| Connections | Forward direction only |
  • Information flow: Unidirectional from input to output
  • No cycles: No feedback connections

Mnemonic: “Input → Hidden → Output (Forward Only)”

Question 3(c) [7 marks]

Draw the structure of ANN and explain functionality of each of its components.

Answer:

graph LR
    A[Input Layer] --> B[Hidden Layer 1]
    B --> C[Hidden Layer 2]
    C --> D[Output Layer]
    
    subgraph "Components"
        E[Neurons]
        F[Weights]
        G[Bias]
        H[Activation Function]
    end
| Component | Functionality |
|-----------|---------------|
| Neurons | Processing units that receive inputs and produce outputs |
| Weights | Connection strengths between neurons |
| Bias | Additional parameter to shift activation function |
| Activation Function | Introduces non-linearity to the network |
  • Input layer: Receives and distributes input data
  • Hidden layers: Extract features and patterns
  • Output layer: Produces final classification or prediction
  • Connections: Weighted links between neurons

Mnemonic: “Neurons with Weights, Bias, and Activation”
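
A minimal NumPy sketch of one forward pass through such a network; the layer sizes and random weights are arbitrary illustrative choices:

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input layer (3 features), one hidden layer (4 neurons), one output neuron.
x  = np.array([0.5, -1.2, 3.0])          # input layer: receives the data
W1 = np.random.randn(4, 3)               # weights: connection strengths, input -> hidden
b1 = np.zeros(4)                          # bias: shifts each hidden neuron's activation
W2 = np.random.randn(1, 4)               # weights: hidden -> output
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)             # hidden layer extracts features
output = sigmoid(W2 @ hidden + b2)        # output layer produces the prediction
print(output)
```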

Question 3(a) OR [3 marks]

Write a short note on Backpropagation.

Answer:

| Aspect | Description |
|--------|-------------|
| Purpose | Training algorithm for neural networks |
| Method | Gradient descent with chain rule |
| Direction | Backward error propagation |
  • Process: Calculate error gradients backwards through network
  • Update: Adjust weights to minimize error

Mnemonic: “Back-ward Error Propagation”
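
A tiny worked sketch of the idea for a single sigmoid neuron (illustrative numbers only), showing the chain-rule gradient and the weight update:

```python
import math

# One weight and one bias, trained to map input 1.0 toward target 0.0.
w, b = 0.6, 0.9
x, target = 1.0, 0.0
lr = 0.5

for step in range(50):
    z = w * x + b
    y = 1.0 / (1.0 + math.exp(-z))        # forward pass
    error = 0.5 * (y - target) ** 2       # squared error loss
    # chain rule: dE/dw = (y - target) * y * (1 - y) * x
    grad_z = (y - target) * y * (1.0 - y)
    w -= lr * grad_z * x                  # adjust weights to minimize error
    b -= lr * grad_z
print(round(y, 4), round(error, 6))       # error shrinks as training proceeds
```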

Question 3(b) OR [4 marks]

Explain Single-layer feed forward network.

Answer:

| Feature | Description |
|---------|-------------|
| Structure | Input layer directly connected to output layer |
| Layers | Only input and output layers |
| Limitations | Can only solve linearly separable problems |
| Example | Perceptron |
  • Capability: Limited to linear decision boundaries
  • Applications: Simple classification tasks

Mnemonic: “Single Layer, Linear Limits”
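
A minimal perceptron sketch learning the linearly separable AND function (illustrative learning rate and epoch count):

```python
# Perceptron (single-layer network) learning the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for epoch in range(20):
    for (x1, x2), target in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0   # step activation
        err = target - out
        w[0] += lr * err * x1        # perceptron learning rule
        w[1] += lr * err * x2
        b    += lr * err

for (x1, x2), _ in data:
    print((x1, x2), 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)   # 0, 0, 0, 1
```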

Question 3(c) OR [7 marks]

Draw and explain the architecture of Recurrent neural network.

Answer:

graph LR
    A[Input] --> B[Hidden State]
    B --> C[Output]
    B --> B[Self-loop]
    D[Previous State] --> B
| Component | Function |
|-----------|----------|
| Hidden State | Maintains memory of previous inputs |
| Recurrent Connection | Feedback from hidden state to itself |
| Sequence Processing | Handles sequential data |
  • Memory: Retains information from previous time steps
  • Applications: Language modeling, speech recognition
  • Advantage: Can process variable-length sequences

Mnemonic: “Recurrent Remembers, Loops Back”
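
A minimal NumPy sketch of the recurrent update, assuming arbitrary small weight matrices and a random input sequence:

```python
import numpy as np

np.random.seed(1)

# The hidden state carries memory across time steps.
hidden_size, input_size = 4, 3
Wxh = np.random.randn(hidden_size, input_size) * 0.1   # input  -> hidden weights
Whh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden -> hidden (recurrent) weights
bh  = np.zeros(hidden_size)

h = np.zeros(hidden_size)                 # previous state (starts empty)
sequence = [np.random.randn(input_size) for _ in range(5)]

for x_t in sequence:
    # new state depends on the current input AND the previous state (the "memory")
    h = np.tanh(Wxh @ x_t + Whh @ h + bh)

print(h)   # final hidden state summarizes the whole sequence
```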

Question 4(a) [3 marks]

Define NLP and write down advantages of it.

Answer:

| Term | Definition |
|------|------------|
| NLP | Natural Language Processing - enables computers to understand, interpret, and generate human language |

Advantages:

  • Human-computer interaction: Natural communication
  • Automation: Automated text processing and analysis
  • Accessibility: Voice interfaces for disabled users

Mnemonic: “Natural Language, Natural Interaction”

Question 4(b) [4 marks]

Compare NLU and NLG.

Answer:

| Aspect | NLU (Understanding) | NLG (Generation) |
|--------|---------------------|------------------|
| Purpose | Interpret human language | Generate human language |
| Input | Text/Speech | Structured data |
| Output | Structured data | Text/Speech |
| Examples | Sentiment analysis | Text summarization |
  • NLU: Converts unstructured text to structured data
  • NLG: Converts structured data to natural text

Mnemonic: “NLU Understands, NLG Generates”

Question 4(c) [7 marks]

Explain word tokenization and frequency distribution of words with suitable example.

Answer:

| Process | Description | Example |
|---------|-------------|---------|
| Tokenization | Breaking text into individual words/tokens | “Hello world” → [“Hello”, “world”] |
| Frequency Distribution | Counting occurrence of each token | {“Hello”: 1, “world”: 1} |

Example:

Text: "The cat sat on the mat"
Tokens: ["The", "cat", "sat", "on", "the", "mat"]
Frequency: {"The": 1, "cat": 1, "sat": 1, "on": 1, "the": 1, "mat": 1}
  • Case sensitivity: “The” and “the” counted separately
  • Applications: Text analysis, search engines
  • Preprocessing: Essential step for NLP tasks

Mnemonic: “Tokenize then Count”
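
A short Python sketch using only the standard library (`str.split` as a simple tokenizer and `collections.Counter` for the frequency distribution; NLTK's `word_tokenize`/`FreqDist` would be the library route):

```python
from collections import Counter

text = "The cat sat on the mat"
tokens = text.split()                  # word tokenization (whitespace split)
freq = Counter(tokens)                 # frequency distribution of the tokens

print(tokens)   # ['The', 'cat', 'sat', 'on', 'the', 'mat']
print(freq)     # each token appears once; "The" and "the" are counted separately

# Lower-casing first merges "The" and "the" into a single count of 2
print(Counter(text.lower().split()))
```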

Question 4(a) OR [3 marks]

List disadvantages of NLP.

Answer:

| Disadvantage | Description |
|--------------|-------------|
| Ambiguity | Multiple meanings of words/sentences |
| Context dependency | Meaning changes with context |
| Language complexity | Grammar rules and exceptions |
  • Cultural variations: Different languages, dialects
  • Computational cost: Resource-intensive processing

Mnemonic: “Ambiguous, Contextual, Complex”

Question 4(b) OR [4 marks]

Explain types of ambiguities in NLP.

Answer:

| Type | Description | Example |
|------|-------------|---------|
| Lexical | Word has multiple meanings | “Bank” (financial/river) |
| Syntactic | Multiple parse trees possible | “I saw a man with a telescope” |
| Semantic | Multiple interpretations | “Flying planes can be dangerous” |
  • Resolution: Context analysis, statistical models
  • Challenge: Major hurdle in NLP systems

Mnemonic: “Lexical words, Syntactic structure, Semantic meaning”

Question 4(c) OR [7 marks]

Explain stemming words and parts of speech (POS) tagging with suitable example.

Answer:

| Process | Description | Example |
|---------|-------------|---------|
| Stemming | Reducing words to root/stem form | “running” → “run”, “flies” → “fli” |
| POS Tagging | Assigning grammatical categories | “The/DT cat/NN runs/VB fast/RB” |

Stemming Example:

Original: ["running", "runs", "runner"]
Stemmed: ["run", "run", "runner"]

POS Tagging Example:

Sentence: "The quick brown fox jumps"
Tagged: "The/DT quick/JJ brown/JJ fox/NN jumps/VB"
  • Stemming purpose: Reduce vocabulary size, group related words
  • POS purpose: Understand grammatical structure
  • Applications: Information retrieval, grammar checking

Mnemonic: “Stem to Root, Tag by Grammar”
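
A short sketch using NLTK's `PorterStemmer` and `pos_tag`, assuming NLTK and its tokenizer/tagger data are installed; the exact tags produced may differ slightly from the simplified tags shown above:

```python
import nltk
from nltk.stem import PorterStemmer

# One-time downloads of tokenizer/tagger models may be required, e.g.:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

stemmer = PorterStemmer()
words = ["running", "runs", "runner"]
print([stemmer.stem(w) for w in words])      # ['run', 'run', 'runner']

sentence = "The quick brown fox jumps"
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))                  # [('The', 'DT'), ('quick', 'JJ'), ...]
```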

Question 5(a) [3 marks]

Define the term word embedding and list various word embedding techniques.

Answer:

| Term | Definition |
|------|------------|
| Word Embedding | Dense vector representations of words that capture semantic relationships |

Techniques:

  • TF-IDF: Term Frequency-Inverse Document Frequency
  • Bag of Words (BoW): Simple word occurrence counting
  • Word2Vec: Neural network-based embeddings

Mnemonic: “TF-IDF counts, BoW bags, Word2Vec vectorizes”
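
A minimal Word2Vec sketch, assuming gensim (4.x parameter names) is installed; the toy corpus is far too small to give meaningful vectors and only shows the API shape:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens (real training needs far more text).
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

model = Word2Vec(sentences, vector_size=20, window=2, min_count=1, epochs=50)
print(model.wv["cat"][:5])             # first 5 dimensions of the dense vector for "cat"
print(model.wv.most_similar("cat"))    # nearest words by cosine similarity
```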

Question 5(b) [4 marks]

Explain about Challenges with TF-IDF and BoW.

Answer:

| Method | Challenges |
|--------|------------|
| TF-IDF | Sparse vectors, no semantic similarity, high dimensionality |
| BoW | Order ignored, context lost, sparse representation |

Common Issues:

  • Sparsity: Most vector elements are zero
  • No semantics: Similar words have different vectors
  • High dimensions: Memory and computation intensive

Mnemonic: “Sparse, No Semantics, High Dimensions”
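
A small scikit-learn sketch that makes the sparsity and dimensionality issue visible, assuming scikit-learn is installed and using a toy corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "machine learning is fun",
]

bow = CountVectorizer().fit_transform(corpus)       # Bag of Words counts
tfidf = TfidfVectorizer().fit_transform(corpus)     # TF-IDF weights

# Both produce sparse matrices: most entries are zero, and the number of
# columns grows with the vocabulary size (high dimensionality).
print(bow.shape, tfidf.shape)
print(bow.toarray())   # word order is ignored, so "cat sat" and "sat cat" look identical
```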

Question 5(c) [7 marks]

Explain applications of NLP with suitable examples.

Answer:

| Application | Description | Example |
|-------------|-------------|---------|
| Machine Translation | Translate between languages | Google Translate |
| Sentiment Analysis | Determine emotional tone | Product review analysis |
| Question Answering | Answer questions from text | Chatbots, virtual assistants |
| Spam Detection | Identify unwanted emails | Email filters |
| Spelling Correction | Fix spelling errors | Auto-correct in text editors |
graph TD
    A[NLP Applications] --> B[Machine Translation]
    A --> C[Sentiment Analysis]
    A --> D[Question Answering]
    A --> E[Spam Detection]
    A --> F[Spelling Correction]
  • Real-world impact: Improves human-computer interaction
  • Business value: Automates text processing tasks
  • Growing field: New applications emerging constantly

Mnemonic: “Translate, Sentiment, Question, Spam, Spell”

Question 5(a) OR [3 marks]

Describe the GloVe (Global Vectors for Word Representation).

Answer:

| Aspect | Description |
|--------|-------------|
| Purpose | Create word vectors using global corpus statistics |
| Method | Combines global matrix factorization and local context |
| Advantage | Captures both global and local statistical information |
  • Global statistics: Uses word co-occurrence information
  • Pre-trained: Available trained vectors for common use

Mnemonic: “Global Vectors, Local Context”

Question 5(b) OR [4 marks]

Explain the Inverse Document Frequency (IDF).

Answer:

| Component | Formula | Purpose |
|-----------|---------|---------|
| IDF | log(N/df) | Measure word importance across documents |
| N | Total documents | Corpus size |
| df | Document frequency | Documents containing the term |
  • High IDF: Rare words (more informative)
  • Low IDF: Common words (less informative)
  • Application: Part of TF-IDF weighting scheme

Mnemonic: “Inverse Document, Rare is Important”
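
A short plain-Python sketch of the IDF formula on an assumed three-document corpus:

```python
import math

documents = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "machine learning is fun",
]
N = len(documents)                        # total number of documents

def idf(term):
    df = sum(1 for doc in documents if term in doc.split())   # document frequency
    return math.log(N / df)               # IDF = log(N / df)

print(round(idf("the"), 3))        # appears in 2 of 3 docs -> lower IDF (~0.405)
print(round(idf("machine"), 3))    # appears in 1 of 3 docs -> higher IDF (~1.099)
```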

Question 5(c) OR [7 marks]

Explain calculation of TF (Term Frequency) for a document with suitable example.

Answer:

| Method | Formula | Description |
|--------|---------|-------------|
| Raw TF | f(t,d) | Simple count of term in document |
| Normalized TF | f(t,d)/max(f(w,d)) | Normalized by maximum frequency |
| Log TF | 1 + log(f(t,d)) | Logarithmic scaling |

Example Document: “The cat sat on the mat. The mat was soft.”

| Term | Count | Raw TF | Normalized TF | Log TF |
|------|-------|--------|---------------|--------|
| “the” | 3 | 3 | 1.0 | 1.48 |
| “cat” | 1 | 1 | 0.33 | 1.0 |
| “mat” | 2 | 2 | 0.67 | 1.30 |

Calculation Steps:

  1. Count each term occurrence
  2. Apply chosen TF formula
  3. Use in TF-IDF calculation

Mnemonic: “Count, Normalize, Log”
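
A short Python check of the table above, using base-10 logarithm (which matches the Log TF values shown):

```python
import math
from collections import Counter

text = "The cat sat on the mat. The mat was soft."
tokens = text.lower().replace(".", "").split()   # simple case-folding and cleanup
counts = Counter(tokens)
max_count = max(counts.values())                 # most frequent term: "the" (3 times)

for term in ("the", "cat", "mat"):
    raw_tf = counts[term]
    norm_tf = raw_tf / max_count                 # normalized by the maximum frequency
    log_tf = 1 + math.log10(raw_tf)              # logarithmic scaling
    print(term, raw_tf, round(norm_tf, 2), round(log_tf, 2))
# the 3 1.0 1.48 | cat 1 0.33 1.0 | mat 2 0.67 1.3
```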
