Fundamentals of Machine Learning (4341603) - Winter 2024 Solution

Author: Milav Dabgar
Question 1(a) [3 marks]

Describe human learning in brief.

Answer:

Human learning is the process by which humans acquire knowledge, skills, and behaviors through experience, practice, and instruction.

Table: Human Learning Process

| Aspect | Description |
|---|---|
| Observation | Gathering information from the environment |
| Experience | Learning through trial and error |
| Practice | Repetition to improve skills |
| Memory | Storing and retrieving information |

  • Learning Types: Visual, auditory, kinesthetic learning styles
  • Feedback Loop: Humans learn from mistakes and successes
  • Adaptation: Ability to apply knowledge to new situations

Mnemonic: “OEPMA” - Observe, Experience, Practice, Memory, Adapt

Question 1(b) [4 marks]

Differentiate: Supervised Learning v/s Unsupervised Learning

Answer:

Comparison Table: Supervised vs Unsupervised Learning

| Parameter | Supervised Learning | Unsupervised Learning |
|---|---|---|
| Training Data | Labeled data (input-output pairs) | Unlabeled data (only inputs) |
| Goal | Predict output for new inputs | Find hidden patterns |
| Examples | Classification, Regression | Clustering, Association |
| Feedback | Direct feedback available | No direct feedback |

  • Supervised: Teacher guides learning with correct answers
  • Unsupervised: Self-discovery of patterns without guidance

Mnemonic: “SL-Labels, UL-Unknown” patterns

Question 1(c) [7 marks]

List out machine learning activities. Explain each in detail.

Answer:

Table: Machine Learning Activities

| Activity | Purpose | Description |
|---|---|---|
| Data Collection | Gather raw data | Collecting relevant data from various sources |
| Data Preprocessing | Clean and prepare data | Handling missing values, normalization |
| Feature Selection | Choose important features | Selecting relevant attributes for learning |
| Model Training | Build learning model | Training the algorithm on the prepared dataset |
| Model Evaluation | Assess performance | Testing model accuracy and effectiveness |
| Model Deployment | Put model to use | Implementing the model in real-world applications |

```mermaid
flowchart TD
    A[Data Collection] --> B[Data Preprocessing]
    B --> C[Feature Selection]
    C --> D[Model Training]
    D --> E[Model Evaluation]
    E --> F[Model Deployment]
    F --> G[Model Monitoring]
```

  • Iterative Process: Activities repeat for model improvement
  • Quality Control: Each step ensures better model performance

Mnemonic: “CPFTEDM” - Collect, Preprocess, Feature, Train, Evaluate, Deploy, Monitor

Question 1(c OR) [7 marks]

Find mean, median, and mode for the following data: 1, 1, 1, 2, 4, 5, 5, 6, 6, 7, 7, 7, 7, 8, 9, 10, 11

Answer:

Data Analysis Table

| Statistic | Formula | Calculation | Result |
|---|---|---|---|
| Mean | Sum / Count | (1+1+1+2+4+5+5+6+6+7+7+7+7+8+9+10+11)/17 | 5.71 |
| Median | Middle value | 9th position in sorted data | 6 |
| Mode | Most frequent value | Value appearing 4 times | 7 |

Step-by-step calculation:

  • Count: 17 values
  • Sum: 97
  • Mean: 97/17 ≈ 5.71
  • Median: middle position (9th of 17) = 6
  • Mode: 7 appears 4 times (highest frequency)

Mnemonic: “MMM” - Mean=Average, Median=Middle, Mode=Most frequent
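These results can be verified with Python's built-in statistics module:

```python
import statistics

data = [1, 1, 1, 2, 4, 5, 5, 6, 6, 7, 7, 7, 7, 8, 9, 10, 11]

mean = statistics.mean(data)      # sum of values / count = 97/17
median = statistics.median(data)  # middle (9th) value of the sorted list
mode = statistics.mode(data)      # most frequent value

print(round(mean, 2), median, mode)  # → 5.71 6 7
```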

Question 2(a) [3 marks]

Write down steps to use hold out method for model training.

Answer:

Hold Out Method Steps

| Step | Action | Purpose |
|---|---|---|
| 1 | Split dataset (70-80% training, 20-30% testing) | Separate data for training and evaluation |
| 2 | Train model on training set | Build learning algorithm |
| 3 | Test model on testing set | Evaluate model performance |

  • Random Split: Ensure representative distribution in both sets
  • No Overlap: Testing data never used in training
  • Single Split: One-time division of data

Mnemonic: “STT” - Split, Train, Test
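The split in step 1 can be sketched in plain Python (an 80/20 split on toy data for illustration; scikit-learn's train_test_split offers the same behaviour with more options):

```python
import random

def holdout_split(data, test_ratio=0.2, seed=42):
    """Randomly split data into train and test sets with no overlap."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)  # random split for a representative distribution
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]  # train, test

data = list(range(100))
train, test = holdout_split(data)
print(len(train), len(test))  # → 80 20
```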

Question 2(b) [4 marks]

Explain structure of confusion matrix.

Answer:

Confusion Matrix Structure

| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN) |

Components Explanation:

  • TP: Correctly predicted positive cases
  • TN: Correctly predicted negative cases
  • FP: Incorrectly predicted as positive (Type I error)
  • FN: Incorrectly predicted as negative (Type II error)

Performance Metrics:

  • Accuracy = (TP+TN)/(TP+TN+FP+FN)
  • Precision = TP/(TP+FP)

Mnemonic: “TPFN-FPTN” for matrix positions
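A small worked example, computing the four counts and the two metrics from hypothetical binary labels (1 = positive, 0 = negative):

```python
def confusion_counts(actual, predicted):
    """Count TP, TN, FP, FN for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # Type I error
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # Type II error
    return tp, tn, fp, fn

actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(actual, predicted)
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
print(tp, tn, fp, fn, accuracy, precision)  # → 3 3 1 1 0.75 0.75
```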

Question 2(c) [7 marks]

Define data pre-processing. Explain various methods used in data pre-processing.

Answer:

Data pre-processing is the technique of preparing raw data by cleaning, transforming, and organizing it for machine learning algorithms.

Data Pre-processing Methods Table

| Method | Purpose | Techniques |
|---|---|---|
| Data Cleaning | Remove noise and inconsistencies | Handle missing values, remove duplicates |
| Data Transformation | Convert data format | Normalization, standardization |
| Data Reduction | Reduce dataset size | Feature selection, dimensionality reduction |
| Data Integration | Combine multiple sources | Merge datasets, resolve conflicts |

```mermaid
flowchart LR
    A[Raw Data] --> B[Data Cleaning]
    B --> C[Data Transformation]
    C --> D[Data Reduction]
    D --> E[Clean Data]
```

  • Missing Values: Use mean, median, or mode for imputation
  • Outliers: Detect and handle extreme values
  • Feature Scaling: Normalize data to same scale

Mnemonic: “CTRI” - Clean, Transform, Reduce, Integrate
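A minimal sketch of two of these methods, mean imputation and min-max feature scaling, on hypothetical values (scikit-learn's SimpleImputer and MinMaxScaler do the same in practice):

```python
def impute_mean(values):
    """Replace missing values (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Normalize values to the [0, 1] range (feature scaling)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

raw = [10, None, 30, 20]        # hypothetical feature with one missing value
clean = impute_mean(raw)
scaled = min_max_scale(clean)
print(clean, scaled)  # → [10, 20.0, 30, 20] [0.0, 0.5, 1.0, 0.5]
```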

Question 2(a OR) [3 marks]

Explain histogram with suitable example.

Answer:

A histogram is a graphical representation showing the frequency distribution of numerical data by dividing it into bins.

Histogram Components Table

| Component | Description |
|---|---|
| X-axis | Data ranges (bins) |
| Y-axis | Frequency of occurrence |
| Bars | Height represents frequency |

Example: Student marks distribution:

  • Bins: 0-20, 21-40, 41-60, 61-80, 81-100
  • Heights show number of students in each range

Mnemonic: “BAR” - Bins, Axes, Range
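The counting behind a histogram can be sketched directly; the marks below are hypothetical, and matplotlib's plt.hist performs the same binning before drawing the bars:

```python
def histogram_counts(values, edges):
    """Count values per bin; bin i covers [edges[i], edges[i+1]),
    and the last bin also includes the top edge."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            last = (i == len(counts) - 1)
            if edges[i] <= v < edges[i + 1] or (last and v == edges[-1]):
                counts[i] += 1
                break
    return counts

marks = [15, 35, 42, 55, 67, 72, 88, 91, 45, 78]   # hypothetical student marks
edges = [0, 20, 40, 60, 80, 100]
print(histogram_counts(marks, edges))  # → [1, 1, 3, 3, 2]
```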

Question 2(b OR) [4 marks]

Relate the appropriate data type of following examples: i) Gender of a person ii) Rank of students iii) Price of a home iv) Color of a flower

Answer:

Data Types Classification Table

| Example | Data Type | Characteristics |
|---|---|---|
| Gender of person | Nominal Categorical | No natural order (Male/Female) |
| Rank of students | Ordinal Categorical | Has meaningful order (1st, 2nd, 3rd) |
| Price of home | Continuous Numerical | Can take any value within range |
| Color of flower | Nominal Categorical | No natural order (Red, Blue, Yellow) |

  • Categorical Data: Limited set of distinct categories
  • Numerical Data: Mathematical operations possible
  • Ordinal: Categories with meaningful sequence

Mnemonic: “NOCO” - Nominal, Ordinal, Continuous

Question 2(c OR) [7 marks]

Describe K-fold cross validation in details.

Answer:

K-fold cross validation is a model evaluation technique that divides the dataset into K equal parts (folds) so that performance can be assessed robustly.

K-fold Process Table

| Step | Action | Purpose |
|---|---|---|
| 1 | Divide data into K equal folds | Create K subsets |
| 2 | Use K-1 folds for training | Train model |
| 3 | Use 1 fold for testing | Evaluate performance |
| 4 | Repeat K times | Each fold serves as test set once |
| 5 | Average all results | Get final performance metric |

```mermaid
flowchart TD
    A[Original Dataset] --> B[Divide into K folds]
    B --> C[Iteration 1: Train on folds 2-K, Test on fold 1]
    C --> D[Iteration 2: Train on folds 1,3-K, Test on fold 2]
    D --> E[... Continue for K iterations]
    E --> F[Average all K results]
```

Advantages:

  • Robust Evaluation: Every data point used for both training and testing
  • Reduced Overfitting: Multiple validation rounds
  • Better Generalization: More reliable performance estimate

Common Values: K=5 or K=10 typically used

Mnemonic: “DURAT” - Divide, Use, Repeat, Average, Test
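A minimal sketch of the fold construction (scikit-learn's KFold does this with shuffling options; the folds here are consecutive for clarity):

```python
def kfold_indices(n, k):
    """Divide indices 0..n-1 into k folds of (nearly) equal size."""
    fold_size, remainder = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 5)
for test_fold in folds:
    train_idx = [j for f in folds if f is not test_fold for j in f]
    # train on train_idx, evaluate on test_fold; finally average the k scores
print(folds)  # → [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```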

Question 3(a) [3 marks]

List out applications of regression.

Answer:

Regression Applications Table

| Domain | Application | Purpose |
|---|---|---|
| Finance | Stock price prediction | Forecast market trends |
| Healthcare | Drug dosage calculation | Determine optimal treatment |
| Marketing | Sales forecasting | Predict revenue |
| Real Estate | Property valuation | Estimate house prices |

  • Predictive Modeling: Forecasting continuous values
  • Trend Analysis: Understanding relationships between variables
  • Risk Assessment: Evaluating future outcomes

Mnemonic: “FHMR” - Finance, Healthcare, Marketing, Real estate

Question 3(b) [4 marks]

Write a short note on single linear regression.

Answer:

Single linear regression models the relationship between one independent variable (X) and one dependent variable (Y) using a straight line.

Linear Regression Components

| Component | Formula | Description |
|---|---|---|
| Equation | Y = a + bX | Linear relationship |
| Slope (b) | Change in Y / Change in X | Rate of change |
| Intercept (a) | Y-value when X = 0 | Starting point |
| Error | Actual - Predicted | Difference from line |

  • Goal: Find best-fit line minimizing errors
  • Method: Least squares optimization
  • Assumption: Linear relationship exists between variables

Mnemonic: “YABX” - Y equals a plus b times X
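The least-squares estimates of slope and intercept can be computed directly; the data below is a hypothetical example lying exactly on Y = 1 + 2X:

```python
def fit_line(xs, ys):
    """Least-squares fit of Y = a + bX (single linear regression)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = covariance(X, Y) / variance(X)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x  # intercept: Y value when X = 0
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]        # exactly Y = 1 + 2X
a, b = fit_line(xs, ys)
print(a, b)  # → 1.0 2.0
```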

Question 3(c) [7 marks]

Write and discuss K-NN algorithm.

Answer:

K-Nearest Neighbors (K-NN) is a lazy learning algorithm that classifies data points based on the majority class of their K nearest neighbors.

K-NN Algorithm Steps

| Step | Action | Description |
|---|---|---|
| 1 | Choose K value | Select number of neighbors |
| 2 | Calculate distances | Find distance to all training points |
| 3 | Sort distances | Arrange in ascending order |
| 4 | Select K nearest | Choose K closest points |
| 5 | Majority voting | Assign most common class |

```mermaid
flowchart TD
    A[New Data Point] --> B[Calculate Distance to All Training Points]
    B --> C[Sort Distances]
    C --> D[Select K Nearest Neighbors]
    D --> E[Majority Vote]
    E --> F[Assign Class Label]
```

Distance Metrics:

  • Euclidean: Most common distance measure
  • Manhattan: Sum of absolute differences
  • Minkowski: Generalized distance metric

Advantages:

  • Simple: Easy to understand and implement
  • No Training: Stores all data, no model building

Disadvantages:

  • Computationally Expensive: Must check all points
  • Sensitive to K: Performance depends on K value

Mnemonic: “CCSM” - Choose, Calculate, Sort, Majority vote
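A compact from-scratch sketch of these five steps, using Euclidean distance and hypothetical 2-D points:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest training points."""
    # Steps 1-2: compute the distance from the query to every training point
    dists = [(math.dist(x, query), label) for x, label in train]
    # Steps 3-4: sort ascending and keep the k nearest neighbors
    k_nearest = sorted(dists)[:k]
    # Step 5: majority vote over the neighbors' labels
    votes = Counter(label for _, label in k_nearest)
    return votes.most_common(1)[0][0]

train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]
print(knn_predict(train, (2, 2)))  # → A
print(knn_predict(train, (8, 7)))  # → B
```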

Question 3(a OR) [3 marks]

Write any three examples of supervised learning in the field of healthcare

Answer:

Healthcare Supervised Learning Examples

| Application | Input | Output | Purpose |
|---|---|---|---|
| Disease Diagnosis | Symptoms, test results | Disease type | Identify medical conditions |
| Drug Response Prediction | Patient data, genetics | Drug effectiveness | Personalized medicine |
| Medical Image Analysis | X-rays, MRI scans | Tumor detection | Early disease detection |

  • Pattern Recognition: Learning from labeled medical data
  • Clinical Decision Support: Assisting doctors in diagnosis
  • Predictive Medicine: Forecasting health outcomes

Mnemonic: “DDM” - Diagnosis, Drug response, Medical imaging

Question 3(b OR) [4 marks]

Differentiate: Classification v/s Regression.

Answer:

Classification vs Regression Comparison

| Aspect | Classification | Regression |
|---|---|---|
| Output Type | Discrete categories/classes | Continuous numerical values |
| Goal | Predict class labels | Predict numerical values |
| Examples | Email spam/not spam | House price prediction |
| Evaluation | Accuracy, Precision, Recall | MAE, MSE, R-squared |

  • Classification: Predicts categories (Yes/No, Red/Blue/Green)
  • Regression: Predicts quantities (Price, Temperature, Weight)
  • Algorithms: Some work for both, others specialized

Mnemonic: “CLASS-Categories, REG-Real numbers”

Question 3(c OR) [7 marks]

Explain classification learning steps in details.

Answer:

Classification learning involves training a model to assign input data to predefined categories or classes.

Classification Learning Steps

| Step | Process | Description |
|---|---|---|
| 1 | Data Collection | Gather labeled training examples |
| 2 | Data Preprocessing | Clean and prepare data |
| 3 | Feature Selection | Choose relevant attributes |
| 4 | Model Selection | Choose classification algorithm |
| 5 | Training | Learn from labeled data |
| 6 | Evaluation | Test model performance |
| 7 | Deployment | Use model for predictions |

```mermaid
flowchart TD
    A[Labeled Training Data] --> B[Preprocessing]
    B --> C[Feature Selection]
    C --> D[Choose Algorithm]
    D --> E[Train Model]
    E --> F[Evaluate Performance]
    F --> G{Good Performance?}
    G -->|No| D
    G -->|Yes| H[Deploy Model]
```

Key Concepts:

  • Supervised Learning: Requires labeled training data
  • Feature Engineering: Transform raw data into useful features
  • Cross-validation: Ensure model generalizes well
  • Performance Metrics: Accuracy, precision, recall, F1-score

Common Algorithms:

  • Decision Trees: Easy to interpret rules
  • SVM: Effective for high-dimensional data
  • Neural Networks: Handle complex patterns

Mnemonic: “DCFMTED” - Data, Clean, Features, Model, Train, Evaluate, Deploy

Question 4(a) [3 marks]

Differentiate: Clustering v/s Classification.

Answer:

Clustering vs Classification Comparison

| Aspect | Clustering | Classification |
|---|---|---|
| Learning Type | Unsupervised | Supervised |
| Training Data | Unlabeled data | Labeled data |
| Goal | Find hidden groups | Predict known classes |
| Output | Group assignments | Class predictions |

  • Clustering: Discovers unknown patterns in data
  • Classification: Learns from known examples to predict new ones
  • Evaluation: Clustering harder to evaluate than classification

Mnemonic: “CL-Unknown groups, CLASS-Known categories”

Question 4(b) [4 marks]

List out advantages and disadvantages of apriori algorithm.

Answer:

Apriori Algorithm Pros and Cons

| Advantages | Disadvantages |
|---|---|
| Easy to understand | Computationally expensive |
| Finds all frequent itemsets | Multiple database scans |
| Well-established algorithm | Large memory requirements |
| Generates association rules | Poor scalability |

Advantages Details:

  • Simplicity: Straightforward logic and implementation
  • Completeness: Finds all frequent patterns
  • Rule Generation: Creates meaningful association rules

Disadvantages Details:

  • Performance: Slow on large datasets
  • Memory: Stores many candidate itemsets
  • Scalability: Performance degrades with data size

Mnemonic: “EASY-SLOW” - Easy to use but slow performance

Question 4(c) [7 marks]

Write and explain applications of unsupervised learning.

Answer:

Unsupervised learning discovers hidden patterns in data without labeled examples.

Unsupervised Learning Applications

| Domain | Application | Technique | Purpose |
|---|---|---|---|
| Marketing | Customer segmentation | Clustering | Group similar customers |
| Retail | Market basket analysis | Association rules | Find buying patterns |
| Anomaly Detection | Fraud detection | Outlier detection | Identify unusual behavior |
| Data Compression | Dimensionality reduction | PCA | Reduce data size |
| Recommendation | Content filtering | Clustering | Suggest similar items |

```mermaid
mindmap
  root((Unsupervised Learning))
    Clustering
      Customer Segmentation
      Image Segmentation
      Gene Sequencing
    Association Rules
      Market Basket Analysis
      Web Usage Mining
      Protein Sequences
    Anomaly Detection
      Fraud Detection
      Network Security
      Quality Control
    Dimensionality Reduction
      Data Visualization
      Feature Extraction
      Data Compression
```

Key Benefits:

  • Pattern Discovery: Reveals hidden structures
  • No Labels Required: Works with raw data
  • Exploratory Analysis: Understand data characteristics

Common Techniques:

  • K-means: Partition data into clusters
  • Hierarchical Clustering: Create cluster hierarchies
  • Apriori: Find association rules

Mnemonic: “MRAD” - Marketing, Retail, Anomaly, Dimensionality

Question 4(a OR) [3 marks]

List out applications of apriori algorithm.

Answer:

Apriori Algorithm Applications

| Domain | Application | Purpose |
|---|---|---|
| Retail | Market basket analysis | Find items bought together |
| Web Mining | Website usage patterns | Discover page visit sequences |
| Bioinformatics | Gene pattern analysis | Identify gene associations |

  • Association Rules: “If A then B” relationships
  • Frequent Patterns: Items appearing together often
  • Cross-selling: Recommend related products

Mnemonic: “RWB” - Retail, Web, Bioinformatics

Question 4(b OR) [4 marks]

Define: Support and Confidence.

Answer:

Association Rule Metrics

| Metric | Formula | Description | Range |
|---|---|---|---|
| Support | Support(A) = Count(A) / Total transactions | How often the itemset appears | 0 to 1 |
| Confidence | Confidence(A→B) = Support(A∪B) / Support(A) | How often the rule is true | 0 to 1 |

Support Example:

  • If itemset {Bread, Milk} appears in 3 out of 10 transactions
  • Support = 3/10 = 0.3 (30%)

Confidence Example:

  • Rule: “Bread → Milk”
  • If {Bread, Milk} appears 3 times, Bread alone appears 5 times
  • Confidence = 3/5 = 0.6 (60%)

Mnemonic: “SUP-How often, CONF-How reliable”
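Both metrics can be computed directly from a transaction list; the ten toy transactions below are constructed to match the bread-and-milk example:

```python
def support(transactions, itemset):
    """Support: fraction of transactions that contain every item in itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence(A → B) = Support(A ∪ B) / Support(A)."""
    return (support(transactions, set(antecedent) | set(consequent))
            / support(transactions, antecedent))

transactions = [
    {"Bread", "Milk"}, {"Bread", "Milk"}, {"Bread", "Milk"},
    {"Bread"}, {"Bread"},
    {"Milk"}, {"Eggs"}, {"Eggs"}, {"Butter"}, {"Butter"},
]
print(support(transactions, {"Bread", "Milk"}))       # → 0.3
print(confidence(transactions, {"Bread"}, {"Milk"}))  # → 0.6
```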

Question 4(c OR) [7 marks]

Write and explain K-means clustering approach in detail.

Answer:

K-means clustering partitions data into K clusters by minimizing the within-cluster sum of squares.

K-means Algorithm Steps

| Step | Action | Description |
|---|---|---|
| 1 | Choose K | Select number of clusters |
| 2 | Initialize centroids | Place K points randomly |
| 3 | Assign points | Each point to nearest centroid |
| 4 | Update centroids | Calculate mean of assigned points |
| 5 | Repeat 3-4 | Until convergence |

```mermaid
flowchart TD
    A[Choose K value] --> B[Initialize K centroids randomly]
    B --> C[Assign each point to nearest centroid]
    C --> D[Update centroids to cluster means]
    D --> E{Centroids changed?}
    E -->|Yes| C
    E -->|No| F[Final clusters]
```

Algorithm Details:

  • Distance Metric: Usually Euclidean distance
  • Convergence: When centroids stop moving significantly
  • Objective: Minimize within-cluster sum of squares (WCSS)

Advantages:

  • Simple: Easy to understand and implement
  • Efficient: Linear time complexity
  • Scalable: Works well with large datasets

Disadvantages:

  • K Selection: Must choose K beforehand
  • Sensitive to Initialization: Different starting points give different results
  • Assumes Spherical Clusters: May not work with irregular shapes

Choosing K:

  • Elbow Method: Plot WCSS vs K, look for “elbow”
  • Silhouette Analysis: Measure cluster quality

Mnemonic: “CIAUR” - Choose K, Initialize, Assign, Update, Repeat
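A minimal from-scratch sketch of the algorithm on hypothetical 2-D points (scikit-learn's KMeans adds smarter initialization such as k-means++):

```python
import math

def kmeans(points, centroids, max_iter=100):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster, until nothing changes."""
    for _ in range(max_iter):
        clusters = [[] for _ in centroids]
        for p in points:                                   # assignment step
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [                                  # update step
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
        if new_centroids == centroids:                     # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated hypothetical groups; K = 2
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(centroids)   # centroids end up at the two cluster means
```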

Question 5(a) [3 marks]

Give the difference between predictive model and descriptive model.

Answer:

Predictive vs Descriptive Models

| Aspect | Predictive Model | Descriptive Model |
|---|---|---|
| Purpose | Forecast future outcomes | Explain current patterns |
| Output | Predictions/classifications | Insights/summaries |
| Examples | Sales forecasting, spam detection | Customer segmentation, trend analysis |

  • Predictive: Uses historical data to predict future
  • Descriptive: Analyzes existing data to understand patterns
  • Goal: Prediction vs Understanding

Mnemonic: “PRED-Future, DESC-Present”

Question 5(b) [4 marks]

List out application of scikit-learn.

Answer:

Scikit-learn Applications

| Category | Applications | Algorithms |
|---|---|---|
| Classification | Email filtering, image recognition | SVM, Random Forest, Naive Bayes |
| Regression | Price prediction, risk assessment | Linear Regression, Decision Trees |
| Clustering | Customer segmentation, data exploration | K-means, DBSCAN |
| Preprocessing | Data cleaning, feature scaling | StandardScaler, LabelEncoder |

  • Machine Learning Library: Comprehensive Python toolkit
  • Easy Integration: Works with NumPy, Pandas
  • Well-documented: Extensive examples and tutorials

Mnemonic: “CRCP” - Classification, Regression, Clustering, Preprocessing

Question 5(c) [7 marks]

Explain features and applications of Numpy.

Answer:

NumPy (Numerical Python) is the fundamental library for scientific computing in Python, providing support for large multi-dimensional arrays and mathematical functions.

NumPy Features Table

| Feature | Description | Benefit |
|---|---|---|
| N-dimensional Arrays | Powerful array objects | Efficient data storage |
| Broadcasting | Operations on different shaped arrays | Flexible computations |
| Mathematical Functions | Trigonometric, logarithmic, statistical | Complete math toolkit |
| Performance | Implemented in C/Fortran | Fast execution |
| Memory Efficiency | Contiguous memory layout | Reduced memory usage |

NumPy Applications

| Domain | Application | Purpose |
|---|---|---|
| Machine Learning | Data preprocessing, feature engineering | Handle numerical data |
| Image Processing | Image manipulation, filtering | Process pixel arrays |
| Scientific Computing | Numerical simulations, modeling | Mathematical computations |
| Financial Analysis | Portfolio optimization, risk modeling | Quantitative analysis |

```mermaid
mindmap
  root((NumPy))
    Core Features
      N-dimensional Arrays
      Broadcasting
      Mathematical Functions
      Fast Performance
    Applications
      Machine Learning
      Image Processing
      Scientific Computing
      Financial Analysis
    Benefits
      Memory Efficient
      Easy to Use
      Integrates Well
      Industry Standard
```

Key Capabilities:

  • Array Operations: Element-wise operations, slicing, indexing
  • Linear Algebra: Matrix operations, eigenvalues, decompositions
  • Random Number Generation: Statistical distributions, sampling
  • Fourier Transforms: Signal processing, frequency analysis

Integration:

  • Pandas: DataFrames built on NumPy arrays
  • Matplotlib: Plotting NumPy arrays
  • Scikit-learn: ML algorithms use NumPy arrays

Mnemonic: “NFAMS” - N-dimensional, Fast, Arrays, Math, Scientific
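A short demonstration of the array, broadcasting, and vectorized-math features listed above (assumes NumPy is installed):

```python
import numpy as np

# N-dimensional array: a 2x3 matrix stored contiguously in memory
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Broadcasting: the 1-D row [10, 20, 30] is stretched across both rows
b = a + np.array([10.0, 20.0, 30.0])

# Vectorized math runs element-wise in compiled code, no Python loop
means = a.mean(axis=0)   # column means → [2.5, 3.5, 4.5]

print(a.shape, b[1], means)
```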

Question 5(a OR) [3 marks]

Write a short note on bagging

Answer:

Bagging (Bootstrap Aggregating) is an ensemble method that improves model performance by training multiple models on different subsets of data.

Bagging Process Table

| Step | Process | Purpose |
|---|---|---|
| Bootstrap Sampling | Create multiple training sets | Generate diverse datasets |
| Train Models | Build model on each subset | Create multiple predictors |
| Aggregate Results | Combine predictions (voting/averaging) | Reduce overfitting |

  • Variance Reduction: Reduces model variance through averaging
  • Parallel Training: Models trained independently
  • Example: Random Forest uses bagging with decision trees

Mnemonic: “BTA” - Bootstrap, Train, Aggregate
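A toy sketch of the three steps, using a deliberately simple nearest-value "model" as the base learner and hypothetical 1-D data (Random Forest applies the same idea with decision trees):

```python
import random
from collections import Counter

rng = random.Random(0)
# Hypothetical labeled data: small values are "A", large values are "B"
data = [(1, "A"), (2, "A"), (3, "A"), (7, "B"), (8, "B"), (9, "B")]

def bootstrap(data):
    """Step 1: sample len(data) points with replacement (bootstrap sampling)."""
    return [rng.choice(data) for _ in data]

def train_stub(sample):
    """Step 2: a trivial base learner - predict the label of the nearest
    training value (stand-in for a real model such as a decision tree)."""
    def predict(x):
        return min(sample, key=lambda pair: abs(pair[0] - x))[1]
    return predict

# Step 3: aggregate - majority vote over independently trained models
models = [train_stub(bootstrap(data)) for _ in range(11)]
prediction = Counter(m(2.5) for m in models).most_common(1)[0][0]
print(prediction)
```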

Question 5(b OR) [4 marks]

List out features of Pandas.

Answer:

Pandas Features

| Feature | Description | Benefit |
|---|---|---|
| DataFrame/Series | Structured data containers | Easy data manipulation |
| File I/O | Read/write CSV, Excel, JSON | Handle various formats |
| Data Cleaning | Handle missing values, duplicates | Prepare clean data |
| Grouping/Aggregation | Group by operations, statistics | Analyze data patterns |

Data Operations:

  • Indexing: Flexible data selection and filtering
  • Merging: Combine datasets with joins
  • Reshaping: Pivot tables and data transformation

Mnemonic: “DFIG” - DataFrame, File I/O, Indexing, Grouping
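A short demonstration of several of these features on a hypothetical table (assumes Pandas is installed):

```python
import pandas as pd

# DataFrame: structured container with labeled columns
df = pd.DataFrame({
    "city":  ["A", "A", "B", "B"],
    "sales": [10, None, 30, 50],   # one missing value
})

# Data cleaning: fill the missing value with the column mean
df["sales"] = df["sales"].fillna(df["sales"].mean())

# Grouping/aggregation: total sales per city
totals = df.groupby("city")["sales"].sum()
print(totals["A"], totals["B"])  # → 40.0 80.0
```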

Question 5(c OR) [7 marks]

Explain features and applications of Matplotlib.

Answer:

Matplotlib is a comprehensive 2D plotting library for Python that produces publication-quality figures in various formats and interactive environments.

Matplotlib Features

| Feature | Description | Capability |
|---|---|---|
| Plot Types | Line, bar, scatter, histogram, pie | Diverse visualization options |
| Customization | Colors, fonts, styles, layouts | Professional appearance |
| Interactive Features | Zoom, pan, widgets | Dynamic exploration |
| Multiple Backends | GUI, web, file output | Flexible deployment |
| 3D Plotting | Surface, wireframe, scatter plots | Three-dimensional visualization |

Matplotlib Applications

| Domain | Application | Visualization Type |
|---|---|---|
| Data Science | Exploratory data analysis | Histograms, scatter plots |
| Scientific Research | Publication figures | Line plots, error bars |
| Business Intelligence | Dashboard creation | Bar charts, trend lines |
| Machine Learning | Model performance visualization | Confusion matrices, ROC curves |
| Engineering | Signal analysis | Time series, frequency plots |

```mermaid
flowchart LR
    A[Raw Data] --> B[Matplotlib Processing]
    B --> C[Static Plots]
    B --> D[Interactive Plots]
    B --> E[Publication Figures]
    C --> F[PNG/PDF Output]
    D --> G[Web Applications]
    E --> H[Research Papers]
```

Key Components:

  • Figure: Top-level container for all plot elements
  • Axes: Individual plots within a figure
  • Artist: Everything drawn on figure (lines, text, etc.)
  • Backend: Handles rendering to different outputs

Plot Customization:

  • Colors/Styles: Wide range of visual options
  • Annotations: Text labels, arrows, legends
  • Subplots: Multiple plots in single figure
  • Layouts: Grid arrangements, spacing control

Integration Benefits:

  • NumPy Arrays: Direct plotting of numerical data
  • Pandas: Built-in plotting methods
  • Jupyter Notebooks: Inline plot display
  • Web Frameworks: Embed plots in applications

Output Formats:

  • Raster: PNG, JPEG for web use
  • Vector: PDF, SVG for publications
  • Interactive: HTML for web deployment

Mnemonic: “MVICS” - Multiple plots, Visualization, Interactive, Customizable, Scientific
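A minimal sketch combining several of the features above, on hypothetical data (the Agg backend renders to a file without needing a display):

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend: render to file only
import matplotlib.pyplot as plt

xs = [1, 2, 3, 4, 5]
ys = [2, 3, 5, 4, 6]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))  # two Axes in one Figure

ax1.plot(xs, ys, color="tab:blue", marker="o")        # line plot with styling
ax1.set_title("Trend")
ax1.set_xlabel("x")
ax1.set_ylabel("y")

ax2.bar(["A", "B", "C"], [3, 7, 5])                   # bar chart
ax2.set_title("Categories")

fig.tight_layout()
fig.savefig("demo.png")          # raster output; use .pdf/.svg for vector output
```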
