Foundation of AI and ML (4351601) - Winter 2024 Solution


Question 1(a) [3 marks]

Define following terms: 1) Fuzzy Logic. 2) Expert System.

Answer:

| Term | Definition |
|------|------------|
| Fuzzy Logic | Computing approach that deals with approximate rather than fixed and exact reasoning, allowing degrees of truth between 0 and 1 |
| Expert System | AI program that mimics human expert decision-making by using a knowledge base and inference engine to solve specific problems |
  • Key Features: Both handle uncertainty and incomplete information
  • Applications: Medical diagnosis, industrial control systems

Mnemonic: “Fuzzy Experts handle uncertain decisions”


Question 1(b) [4 marks]

Define following terms: 1) Machine Learning. 2) Reinforcement Learning.

Answer:

| Term | Definition | Key Characteristic |
|------|------------|--------------------|
| Machine Learning | Algorithm that automatically improves performance through experience without explicit programming | Learning from data patterns |
| Reinforcement Learning | Agent learns optimal actions through trial-and-error interactions with the environment using rewards/penalties | Learning through feedback |

Diagram:

```mermaid
graph LR
    A[Data] --> B[ML Algorithm] --> C[Model] --> D[Predictions]
    E[Environment] --> F[RL Agent] --> G[Actions] --> E
    E --> H[Rewards] --> F
```

Mnemonic: “ML learns from Data, RL learns from Rewards”


Question 1(c) [7 marks]

Explain types of Artificial Intelligence in detail with suitable diagram.

Answer:

Table: Types of AI

| Type | Description | Capability | Examples |
|------|-------------|------------|----------|
| Narrow AI | Designed for specific tasks | Limited domain expertise | Siri, chess programs |
| General AI | Human-level intelligence across domains | Multi-domain reasoning | Currently theoretical |
| Super AI | Exceeds human intelligence | Beyond human capabilities | Future concept |

Diagram:

```mermaid
graph TD
    A[Artificial Intelligence] --> B[Narrow AI<br/>Weak AI]
    A --> C[General AI<br/>Strong AI]
    A --> D[Super AI]

    B --> E[Task-Specific<br/>Current Reality]
    C --> F[Human-Level<br/>Future Goal]
    D --> G[Beyond Human<br/>Theoretical]
```
  • Current Status: We are in the Narrow AI era
  • Development Path: Narrow → General → Super AI
  • Timeline: There is no consensus on when General AI will arrive; expert predictions range from a few decades to far longer

Mnemonic: “Narrow Now, General Goal, Super Soon”


Question 1(c) OR [7 marks]

Explain various aspects related to ethics while designing an AI system. Also, explain limitations of AI system in detail.

Answer:

AI Ethics Table:

| Ethical Aspect | Description | Implementation |
|----------------|-------------|----------------|
| Fairness | Avoid bias and discrimination | Diverse training data |
| Transparency | Explainable AI decisions | Clear algorithms |
| Privacy | Protect user data | Data encryption |
| Accountability | Responsibility for AI actions | Human oversight |

AI Limitations:

  • Data Dependency: Requires large, quality datasets
  • Lack of Common Sense: Cannot understand context like humans
  • Brittleness: Fails in unexpected situations
  • Black Box Problem: Difficult to explain decisions

Mnemonic: “Fair, Transparent, Private, Accountable AI has Data, Common sense, Brittleness, Black box issues”


Question 2(a) [3 marks]

List characteristics of reinforcement learning.

Answer:

| Characteristic | Description |
|----------------|-------------|
| Trial-and-Error | Agent learns through experimentation |
| Reward-Based | Feedback through rewards/penalties |
| Sequential Decision Making | Actions affect future states |
| Exploration vs Exploitation | Balance between trying new actions and using known good actions |

Mnemonic: “Trial Rewards Sequential Exploration”


Question 2(b) [4 marks]

Explain positive reinforcement and negative reinforcement.

Answer:

Comparison Table:

| Type | Definition | Effect | Example |
|------|------------|--------|---------|
| Positive Reinforcement | Adding a pleasant stimulus to increase behavior | Increases desired behavior | Giving a treat for good performance |
| Negative Reinforcement | Removing an unpleasant stimulus to increase behavior | Increases desired behavior | Stopping an alarm when the task is completed |

Key Difference: Both increase behavior, but positive reinforcement adds a pleasant stimulus while negative reinforcement removes an unpleasant one.

Mnemonic: “Positive Adds pleasure, Negative Removes pain”


Question 2(c) [7 marks]

Explain Supervised learning in detail.

Answer:

Definition: Learning algorithm that learns from labeled training data to make predictions on new data.

Process Table:

| Step | Description | Example |
|------|-------------|---------|
| Training | Algorithm learns from input-output pairs | Email → Spam/Not Spam |
| Validation | Test model on unseen data | Check accuracy |
| Prediction | Generate outputs for new inputs | Classify new emails |

Types:

  • Classification: Predicts categories (spam detection)
  • Regression: Predicts continuous values (house prices)

Diagram:

```mermaid
graph LR
    A[Training Data<br/>X,Y pairs] --> B[Learning Algorithm] --> C[Model]
    D[New Input X] --> C --> E[Prediction Y]
```
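
A minimal sketch of the train/validate/predict cycle using scikit-learn (an assumed dependency), with the bundled Iris dataset standing in for labeled data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled (X, y) pairs: the "teacher" provides correct answers
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=200)   # learning algorithm
model.fit(X_train, y_train)                # training step
predictions = model.predict(X_test)        # prediction on unseen inputs
print("Validation accuracy:", accuracy_score(y_test, predictions))
```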

Mnemonic: “Supervised = Teacher provides correct answers”


Question 2(a) OR [3 marks]

List key components involved in human learning.

Answer:

| Component | Function |
|-----------|----------|
| Observation | Gathering information from environment |
| Memory | Storing and retrieving experiences |
| Practice | Repetition to improve skills |
| Feedback | Information about performance |

Mnemonic: “Observe, Memorize, Practice, Feedback”


Question 2(b) OR [4 marks]

Explain about well-posed learning problem in detail.

Answer:

Definition: A learning problem with clearly defined task, performance measure, and experience.

Components Table:

| Component | Description | Example |
|-----------|-------------|---------|
| Task (T) | What the system should learn to do | Play chess |
| Performance (P) | How to measure success | Win percentage |
| Experience (E) | Training data or practice | Previous games |

Formula: Learning = Improving P on T through E

Criteria: Problem must be measurable, achievable, and have available data.

Mnemonic: “Task Performance Experience = TPE for learning”


Question 2(c) OR [7 marks]

Explain Unsupervised learning in detail.

Answer:

Definition: Learning patterns from data without labeled examples or target outputs.

Types Table:

| Type | Purpose | Algorithm | Example |
|------|---------|-----------|---------|
| Clustering | Group similar data | K-means | Customer segmentation |
| Association | Find relationships | Apriori | Market basket analysis |
| Dimensionality Reduction | Reduce features | PCA | Data visualization |

Diagram:

```mermaid
graph TD
    A[Unlabeled Data] --> B[Unsupervised Algorithm]
    B --> C[Clustering]
    B --> D[Association Rules]
    B --> E[Dimensionality Reduction]
```
  • No Teacher: Algorithm finds hidden patterns independently
  • Exploratory: Discovers unknown structures in data
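
As an illustration, here is a minimal clustering sketch using scikit-learn's K-means; the six toy points are made up for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: no target outputs, the algorithm must find structure itself
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster labels:", kmeans.labels_)        # discovered groupings
print("Centroids:", kmeans.cluster_centers_)    # center of each cluster
```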

Mnemonic: “Unsupervised = No teacher, find patterns yourself”


Question 3(a) [3 marks]

Explain SIGMOID function. Also, draw its graph and provide an example of SIGMOID function.

Answer:

Definition: Activation function that maps any real number to value between 0 and 1.

Formula: σ(x) = 1/(1 + e^(-x))

Graph: S-shaped curve rising from 0 toward 1 over x ∈ [−5, 5], passing through σ(0) = 0.5.

Example: For x = 0, σ(0) = 1/(1 + e^0) = 1/2 = 0.5

Properties: S-shaped curve, smooth gradient, used in binary classification
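
A quick NumPy check of the formula and the worked example (a sketch only):

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0))                      # 0.5, matching the example above
print(sigmoid(np.array([-5.0, 5.0])))  # ~0.0067 and ~0.9933, squashed into (0, 1)
```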

Mnemonic: “Sigmoid Squashes values between 0 and 1”


Question 3(b) [4 marks]

Define following term: 1) Activation function. 2) Artificial neural network.

Answer:

| Term | Definition | Purpose |
|------|------------|---------|
| Activation Function | Mathematical function that determines neuron output based on weighted inputs | Introduces non-linearity to neural networks |
| Artificial Neural Network | Computing system inspired by biological neural networks with interconnected nodes | Pattern recognition and machine learning |

Key Features:

  • Non-linear processing enables complex pattern learning
  • Layered architecture processes information hierarchically

Mnemonic: “Activation Artificially mimics brain neurons”


Question 3(c) [7 marks]

Draw and explain architecture of Recurrent network in detail.

Answer:

Definition: Neural network with connections that create loops, allowing information persistence.

Architecture Diagram:

```mermaid
graph LR
    A[Input x_t] --> B[Hidden State h_t]
    B --> C[Output y_t]
    B --> D[Hidden State h_t+1]
    E[Previous State h_t-1] --> B

    subgraph "Time Steps"
    F[t-1] --> G[t] --> H[t+1]
    end
```

Components Table:

| Component | Function | Formula |
|-----------|----------|---------|
| Hidden State | Memory of previous inputs | h_t = f(W_h · h_(t-1) + W_x · x_t) |
| Input Layer | Current time step input | x_t |
| Output Layer | Prediction at time t | y_t = W_y · h_t |

Applications: Speech recognition, language translation, time series prediction

Advantage: Handles sequential data with memory of past information
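
A minimal NumPy sketch of the recurrence h_t = f(W_h · h_(t-1) + W_x · x_t) with tanh as f; the sizes and random weights are illustrative stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_h = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights
W_x = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights
W_y = rng.normal(size=(1, hidden_size))            # hidden-to-output weights

h = np.zeros(hidden_size)                # h_0: no memory of the past yet
for t in range(3):                       # three time steps
    x_t = rng.normal(size=input_size)    # current input
    h = np.tanh(W_h @ h + W_x @ x_t)     # state carries past information forward
    y_t = W_y @ h                        # output at time t
print("Output at final step:", y_t)
```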

Mnemonic: “Recurrent = Remembers previous states”


Question 3(a) OR [3 marks]

Explain TANH function. Also, draw its graph and provide an example of TANH function.

Answer:

Definition: Hyperbolic tangent activation function that maps values between -1 and 1.

Formula: tanh(x) = (e^x - e^(-x))/(e^x + e^(-x))

Graph: S-shaped curve rising from −1 toward +1 over x ∈ [−3, 3], passing through tanh(0) = 0.

Example: For x = 0, tanh(0) = (1-1)/(1+1) = 0

Properties: Zero-centered, S-shaped, stronger gradients than sigmoid
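
The same check in NumPy, which provides tanh directly (a sketch only):

```python
import numpy as np

print(np.tanh(0))                      # 0.0, matching the example above
print(np.tanh(np.array([-3.0, 3.0])))  # ~-0.995 and ~0.995, squashed into (-1, 1)
```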

Mnemonic: “TANH = Two-sided sigmoid (-1 to +1)”


Question 3(b) OR [4 marks]
#

Define following term: 1) Biological neural network. 2) Loss calculation.

Answer:

| Term | Definition | Key Aspects |
|------|------------|-------------|
| Biological Neural Network | Network of interconnected neurons in living organisms that process information | Dendrites, cell body, axon, synapses |
| Loss Calculation | Mathematical measure of difference between predicted and actual outputs | Guides learning through backpropagation |

Biological Structure: Neurons → Synapses → Neural Networks → Brain
Loss Types: Mean Squared Error, Cross-entropy, Absolute Error
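
A small NumPy sketch of two of those loss calculations; the toy predictions are made up for illustration:

```python
import numpy as np

y_true = np.array([1.0, 0.0, 1.0])   # actual outputs
y_pred = np.array([0.9, 0.2, 0.7])   # model predictions

# Mean Squared Error: average squared difference
mse = np.mean((y_true - y_pred) ** 2)

# Binary cross-entropy: penalizes confident wrong predictions heavily
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(f"MSE: {mse:.4f}, Cross-entropy: {bce:.4f}")
```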

Mnemonic: “Biology inspires AI, Loss measures learning progress”


Question 3(c) OR [7 marks]

Draw and explain architecture of multi-layer feed-forward network in detail.

Answer:

Definition: Neural network with multiple layers where information flows forward from input to output.

Architecture Diagram:

```mermaid
graph LR
    subgraph "Input Layer"
    A1[x1]
    A2[x2]
    A3[x3]
    end

    subgraph "Hidden Layer 1"
    B1[h1]
    B2[h2]
    B3[h3]
    end

    subgraph "Hidden Layer 2"
    C1[h4]
    C2[h5]
    end

    subgraph "Output Layer"
    D1[y1]
    D2[y2]
    end

    A1 --> B1
    A1 --> B2
    A2 --> B1
    A2 --> B3
    A3 --> B2
    A3 --> B3

    B1 --> C1
    B2 --> C1
    B2 --> C2
    B3 --> C2

    C1 --> D1
    C1 --> D2
    C2 --> D1
    C2 --> D2
```

Layer Functions Table:

| Layer | Function | Processing |
|-------|----------|------------|
| Input | Receives data | No processing, just distribution |
| Hidden | Feature extraction | Weighted sum + activation function |
| Output | Final prediction | Classification or regression output |

Information Flow: Input → Hidden Layer(s) → Output (unidirectional)
Learning: Backpropagation adjusts weights based on error
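
A NumPy sketch of one forward pass through the 3-3-2-2 network drawn above; random weights stand in for learned ones, and sigmoid is an assumed activation choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 3))   # input layer -> hidden layer 1
W2 = rng.normal(size=(2, 3))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(2, 2))   # hidden layer 2 -> output layer

x = np.array([0.5, -1.0, 2.0])   # input layer only distributes the data
h1 = sigmoid(W1 @ x)             # weighted sum + activation
h2 = sigmoid(W2 @ h1)
y = sigmoid(W3 @ h2)             # final prediction
print("Network output:", y)
```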

Mnemonic: “Multi-layer = Multiple hidden layers for complex learning”


Question 4(a) [3 marks]

List advantages of NLP in detail.

Answer:

| Advantage | Description |
|-----------|-------------|
| Automation | Automates text processing tasks that require human effort |
| Language Understanding | Processes multiple languages and dialects effectively |
| 24/7 Availability | Works continuously without human intervention |
| Scalability | Handles large volumes of text data efficiently |

Applications: Chatbots, translation, sentiment analysis, document processing

Mnemonic: “NLP = Automates Language Understanding 24/7 at Scale”


Question 4(b) [4 marks]

Explain Natural Language Generation in detail.

Answer:

Definition: AI process that converts structured data into natural human language text.

Process Table:

| Step | Description | Function |
|------|-------------|----------|
| Content Planning | Decide what information to include | Data selection |
| Sentence Planning | Structure sentences and paragraphs | Text organization |
| Surface Realization | Generate actual text with grammar | Final output |

Applications: Report generation, chatbots, automated journalism, personalized content

Example: Converting sales data → “Sales increased 15% this quarter due to strong performance in electronics.”
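
A toy surface-realization sketch of that example, turning a hypothetical structured record into the sentence (real NLG systems also perform the content- and sentence-planning stages):

```python
# Hypothetical structured record standing in for real sales data
sales = {"change_pct": 15, "period": "this quarter", "driver": "electronics"}

sentence = (f"Sales increased {sales['change_pct']}% {sales['period']} "
            f"due to strong performance in {sales['driver']}.")
print(sentence)
```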

Mnemonic: “NLG = Numbers to Narrative”


Question 4(c) [7 marks]

Explain types of ambiguities in NLP. Also, provide examples of each ambiguity.

Answer:

Ambiguity Types Table:

| Type | Description | Example | Resolution |
|------|-------------|---------|------------|
| Lexical | Word has multiple meanings | “Bank” (river/financial) | Context analysis |
| Syntactic | Sentence structure unclear | “I saw a man with a telescope” | Parse trees |
| Semantic | Meaning unclear | “Colorless green ideas” | Semantic rules |
| Pragmatic | Context-dependent meaning | “Can you pass the salt?” (request/question) | Situational context |

Diagram:

```mermaid
graph TD
    A[NLP Ambiguities] --> B[Lexical<br/>Word Level]
    A --> C[Syntactic<br/>Grammar Level]
    A --> D[Semantic<br/>Meaning Level]
    A --> E[Pragmatic<br/>Context Level]
```

Resolution Strategies: Context analysis, statistical models, knowledge bases

Mnemonic: “Lexical Syntactic Semantic Pragmatic = LSSP ambiguities”


Question 4(a) OR [3 marks]

List disadvantages of NLP in detail.

Answer:

| Disadvantage | Description |
|--------------|-------------|
| Context Limitations | Struggles with sarcasm, humor, cultural references |
| Language Complexity | Difficulty with idioms, slang, regional dialects |
| Data Requirements | Needs large amounts of training data |
| Computational Cost | Requires significant processing power and memory |

Challenges: Ambiguity, multilingual support, real-time processing

Mnemonic: “NLP Challenges = Context, Language, Data, Computation”


Question 4(b) OR [4 marks]

Explain Natural Language Understanding in detail.

Answer:

Definition: AI capability to comprehend and interpret human language meaning and intent.

Components Table:

| Component | Function | Example |
|-----------|----------|---------|
| Tokenization | Break text into words/phrases | “Hello world” → [“Hello”, “world”] |
| Parsing | Analyze grammatical structure | Identify subject, verb, object |
| Semantic Analysis | Extract meaning | Understand relationships between words |
| Intent Recognition | Identify user purpose | “Book flight” → travel booking intent |

Process Flow: Text Input → Tokenization → Parsing → Semantic Analysis → Intent Understanding

Applications: Virtual assistants, chatbots, voice commands
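
A toy sketch of the tokenization and intent-recognition steps; the keyword table is hypothetical, and production systems use trained models rather than keyword lookup:

```python
# Hypothetical keyword-to-intent mapping, for illustration only
INTENTS = {"book": "travel_booking", "cancel": "cancellation", "weather": "weather_query"}

def understand(text):
    tokens = text.lower().split()          # tokenization
    for token in tokens:
        if token in INTENTS:               # naive intent recognition
            return tokens, INTENTS[token]
    return tokens, "unknown"

print(understand("Book a flight to Delhi"))
# (['book', 'a', 'flight', 'to', 'delhi'], 'travel_booking')
```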

Mnemonic: “NLU = Naturally Understands Language”


Question 4(c) OR [7 marks]

Explain stemming and lemmatization in detail. Also provide two examples of each.

Answer:

Definitions:

| Process | Description | Method | Output |
|---------|-------------|--------|--------|
| Stemming | Reduces words to root form by removing suffixes | Rule-based chopping | Word stem |
| Lemmatization | Reduces words to dictionary base form | Dictionary lookup | Valid word |

Stemming Examples:

  1. “running”, “runs” → “run” (irregular forms like “ran” need lemmatization, not stemming)
  2. “fishing”, “fished” → “fish”

Lemmatization Examples:

  1. “better” → “good” (comparative to base)
  2. “children” → “child” (plural to singular)

Comparison Table:

| Aspect | Stemming | Lemmatization |
|--------|----------|---------------|
| Speed | Faster | Slower |
| Accuracy | Lower | Higher |
| Output | May not be valid word | Always valid word |
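
Both processes in NLTK, a common choice (assumes nltk is installed and the WordNet corpus has been fetched via nltk.download('wordnet')):

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# Stemming: fast rule-based suffix chopping
print([stemmer.stem(w) for w in ["running", "runs", "fishing"]])  # ['run', 'run', 'fish']

# Lemmatization: dictionary lookup, always yields a valid word
print(lemmatizer.lemmatize("children"))          # 'child'
print(lemmatizer.lemmatize("better", pos="a"))   # 'good' (needs the POS hint)
```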

Mnemonic: “Stemming = Speed, Lemmatization = Language accuracy”


Question 5(a) [3 marks]

Define: 1) Word embeddings. 2) Machine Translation.

Answer:

| Term | Definition | Purpose |
|------|------------|---------|
| Word Embeddings | Dense vector representations of words that capture semantic relationships | Convert text to numerical form for ML |
| Machine Translation | Automated conversion of text from one language to another | Enable cross-language communication |

Key Features:

  • Word embeddings preserve word relationships in vector space
  • Machine translation maintains meaning across languages

Mnemonic: “Words become Vectors, Languages become Translations”


Question 5(b) [4 marks]

Explain Word2Vec in detail.

Answer:

Definition: Neural network technique that creates word embeddings by learning word associations from large text corpus.

Architecture Types:

| Model | Description | Prediction |
|-------|-------------|------------|
| CBOW (Continuous Bag of Words) | Predicts target word from context | Context → Target |
| Skip-gram | Predicts context words from target | Target → Context |

Process:

  1. Training: Neural network learns word relationships
  2. Vector Creation: Each word gets unique vector representation
  3. Similarity: Similar words have similar vectors

Example: vector(“king”) - vector(“man”) + vector(“woman”) ≈ vector(“queen”)
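
A minimal gensim sketch (assumed installed); the four-sentence corpus is far too small for meaningful vectors and only shows the API shape:

```python
from gensim.models import Word2Vec

corpus = [["king", "rules", "the", "kingdom"],
          ["queen", "rules", "the", "kingdom"],
          ["man", "walks"],
          ["woman", "walks"]]

model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)  # sg=1 -> Skip-gram
print(model.wv["king"][:5])   # first 5 dimensions of the learned vector
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```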

Mnemonic: “Word2Vec = Words to Vectors through Context”


Question 5(c) [7 marks]

Consider that a manufacturer of a product has collected feedback from the customer and is now willing to apply sentiment analysis on it. What are the steps to be followed for the same? Explain in detail.

Answer:

Sentiment Analysis Pipeline:

| Step | Description | Tools/Methods |
|------|-------------|---------------|
| Data Collection | Gather customer feedback | Surveys, reviews, social media |
| Data Preprocessing | Clean and prepare text | Remove noise, tokenization |
| Feature Extraction | Convert text to numerical form | TF-IDF, word embeddings |
| Model Training | Train sentiment classifier | Naive Bayes, SVM, neural networks |
| Prediction | Classify sentiment | Positive/Negative/Neutral |
| Analysis | Interpret results | Generate insights and reports |

Implementation Flow:

```mermaid
graph LR
    A[Customer Feedback] --> B[Text Preprocessing]
    B --> C[Feature Extraction]
    C --> D[Sentiment Model]
    D --> E[Classification]
    E --> F[Business Insights]
```

Preprocessing Steps:

  • Remove special characters and URLs
  • Convert to lowercase for consistency
  • Remove stop words (the, and, or)
  • Handle negations (not good → negative sentiment)

Model Evaluation: Use metrics like accuracy, precision, recall, F1-score

Business Value: Understand customer satisfaction, improve products, identify issues
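
A compressed sketch of the pipeline in scikit-learn; the four feedback strings and labels are made up, and a real system would train on thousands of examples:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled feedback (steps 1-2: collection and light preprocessing)
feedback = ["great product, works perfectly",
            "terrible quality, broke in a day",
            "very satisfied with the purchase",
            "waste of money, not good"]
labels = ["positive", "negative", "positive", "negative"]

# Steps 3-4: TF-IDF feature extraction + Naive Bayes training in one pipeline
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(feedback, labels)

# Step 5: classify new feedback
print(model.predict(["the product is amazing"]))  # ['positive'] on this toy data
```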

Mnemonic: “Collect, Clean, Extract, Train, Predict, Analyze = CCETPA”


Question 5(a) OR [3 marks]

List out advantages of GloVe with respect to NLP.

Answer:

| Advantage | Description |
|-----------|-------------|
| Global Context | Considers entire corpus statistics, not just local context |
| Linear Relationships | Captures semantic relationships through vector arithmetic |
| Efficiency | Faster training compared to Word2Vec on large datasets |
| Stability | Consistent results across multiple training runs |

Key Benefits: Better performance on word analogy tasks, captures both local and global statistics

Mnemonic: “GloVe = Global Vector Excellence”


Question 5(b) OR [4 marks]

Explain challenges with TFIDF and BoW.

Answer:

Challenges Table:

| Method | Challenges | Impact |
|--------|------------|--------|
| TF-IDF | Ignores word order; sparse vectors; no semantic similarity | Limited context understanding |
| BoW | Loses sequence information; high dimensionality; no word relationships | Poor semantic representation |

Common Issues:

  • Vocabulary size: Creates very large, sparse matrices
  • Out-of-vocabulary: Cannot handle new words
  • Semantic gap: “Good” and “excellent” treated as different

Solutions: Use word embeddings (Word2Vec, GloVe) for dense, semantic representations
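
A short scikit-learn demonstration of that semantic gap: BoW gives “good” and “excellent” unrelated columns, so the two near-synonymous sentences share no signal on the words that matter:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the movie was good", "the movie was excellent"]
bow = CountVectorizer()
X = bow.fit_transform(docs)

print(bow.get_feature_names_out())  # each word is an independent dimension
print(X.toarray())                  # rows differ on 'good' vs 'excellent';
                                    # BoW sees no similarity between the two words
```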

Mnemonic: “TF-IDF and BoW = Sparse, No order, No semantics”


Question 5(c) OR [7 marks]

Consider that an e-mail service provider is willing to apply a SPAM detection technique. What are the steps to be followed to detect a SPAM e-mail? Explain in detail.

Answer:

SPAM Detection Pipeline:

| Step | Description | Techniques |
|------|-------------|------------|
| Data Collection | Gather labeled spam/ham emails | Email datasets, user reports |
| Feature Engineering | Extract relevant features | Subject analysis, sender patterns |
| Text Preprocessing | Clean email content | Remove HTML, normalize text |
| Feature Extraction | Convert to numerical form | TF-IDF, N-grams, metadata |
| Model Training | Train classifier | Naive Bayes, SVM, Random Forest |
| Validation | Test model performance | Cross-validation, test set |
| Deployment | Integrate with email system | Real-time classification |

Feature Types:

| Feature Category | Examples | Purpose |
|------------------|----------|---------|
| Content-based | Keywords, phrases, HTML tags | Analyze email body |
| Header-based | Sender, subject, timestamps | Check metadata |
| Behavioral | Sending patterns, frequency | Identify suspicious behavior |

Implementation Diagram:

```mermaid
graph TD
    A[Incoming Email] --> B[Feature Extraction]
    B --> C{SPAM Classifier}
    C -->|Spam| D[Spam Folder]
    C -->|Ham| E[Inbox]
    F[User Feedback] --> G[Model Update]
```

Model Evaluation Metrics:

  • Precision: Avoid false positives (legitimate emails marked as spam)
  • Recall: Catch actual spam emails
  • F1-Score: Balance between precision and recall

Continuous Learning: Update model with new spam patterns and user feedback
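
A sketch of the extraction/training/classification steps with scikit-learn; the four emails and labels are hypothetical placeholders for a real labeled dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled training emails (1 = spam, 0 = ham)
emails = ["win a free prize now",
          "claim your lottery reward today",
          "meeting rescheduled to monday",
          "please review the attached report"]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()               # feature extraction
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels)  # model training

# Real-time classification of an incoming email
incoming = vectorizer.transform(["free lottery prize inside"])
print("Spam" if classifier.predict(incoming)[0] == 1 else "Ham")
```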

Mnemonic: “Collect, Engineer, Process, Extract, Train, Validate, Deploy = CEPTVD”
