Friday, 20 March 2026

Python Coding Challenge - Question with Answer (ID -200326)

Explanation:

🔹 1. List Creation
nums = [None, 0, False, 1, 2]

A list named nums is created.

It contains different types of values:

None → represents no value

0 → integer zero

False → boolean false

1, 2 → positive integers

👉 In Python:

None, 0, False are falsy values

1, 2 are truthy values

🔹 2. Using filter() Function
res = list(filter(bool, nums))

filter(function, iterable) applies a function to each item.

Only elements where the function returns True are kept.

👉 Here:

bool is used as the function.

Each element is checked like this:

bool(value)

🔹 3. Filtering Process (Step-by-Step)
Element   bool(value)   Result
None      False         ❌ Removed
0         False         ❌ Removed
False     False         ❌ Removed
1         True          ✅ Kept
2         True          ✅ Kept

👉 After filtering:

[1, 2]

🔹 4. Converting to List

filter() returns a filter object (iterator).

list() converts it into a proper list.

🔹 5. Printing Output
print(res)

Displays the final filtered list.

✅ Final Output
[1, 2]
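The full snippet can be run as below; an equivalent list comprehension is added for comparison:

```python
nums = [None, 0, False, 1, 2]

# filter(bool, nums) keeps only the truthy elements
res = list(filter(bool, nums))
print(res)  # [1, 2]

# Equivalent list comprehension
res2 = [x for x in nums if x]
print(res2)  # [1, 2]
```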

BOOK: 100 Python Challenges to Think Like a Developer

AI Mathematics — Deep Intelligence Systems Neural Networks, Attention, and Scaling: Understanding the Mathematical Architecture of Modern Artificial ... Intelligence from First Principles (Book 4)

Introduction

Artificial intelligence has experienced rapid progress in recent years, especially with the rise of deep learning systems capable of performing tasks such as language translation, image recognition, and autonomous decision-making. Behind these intelligent systems lies a strong mathematical foundation that explains how models learn from data, optimize predictions, and scale to massive datasets.

The book AI Mathematics — Deep Intelligence Systems: Neural Networks, Attention, and Scaling explores the mathematical principles that power modern AI technologies. It focuses on understanding AI systems from first principles, explaining how neural networks, attention mechanisms, and large-scale architectures are built and optimized mathematically.

By connecting mathematical theory with modern AI architectures, the book helps readers understand the deeper structure behind intelligent systems.


Why Mathematics Is Essential for Artificial Intelligence

Mathematics forms the backbone of artificial intelligence and machine learning. Concepts from linear algebra, probability theory, optimization, and statistics allow researchers to model complex systems and train neural networks effectively.

Mathematical tools are used to:

  • Represent data and features in high-dimensional spaces

  • Optimize neural network parameters during training

  • Understand model behavior and performance

  • Design algorithms capable of learning from large datasets

Researchers note that mathematics provides the analytical framework needed to understand neural network architectures and improve AI algorithms.

Without these mathematical foundations, modern AI systems would not be able to function effectively.


Neural Networks: The Mathematical Core of AI

Neural networks are the fundamental building blocks of deep learning systems. Inspired by biological neurons, these networks consist of interconnected layers that transform input data into meaningful outputs.

From a mathematical perspective, neural networks operate through:

  • Matrix operations that represent connections between neurons

  • Activation functions that introduce non-linear behavior

  • Gradient-based optimization methods used to adjust parameters

Training a neural network involves minimizing a loss function using algorithms such as gradient descent. This process allows the model to learn patterns and improve predictions over time.

These mathematical principles allow neural networks to perform tasks ranging from image classification to speech recognition.
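As a minimal sketch (not from the book), a single layer of a neural network combines a matrix operation with a non-linear activation; the shapes and values below are made up for illustration:

```python
import numpy as np

# One-layer forward pass: y = relu(W x + b)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))  # weight matrix: 4 inputs -> 3 neurons
b = np.zeros(3)                  # bias vector
x = rng.standard_normal(4)       # input vector

y = np.maximum(0, W @ x + b)     # ReLU activation introduces non-linearity
print(y.shape)  # (3,)
```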


The Attention Mechanism in Modern AI

One of the most important innovations in modern AI systems is the attention mechanism. Attention allows neural networks to focus on the most relevant parts of input data when making predictions.

Instead of treating all information equally, attention assigns different weights to different parts of the input sequence. This enables the model to emphasize the most important information.

For example, in natural language processing, not every word in a sentence contributes equally to meaning. Attention mechanisms dynamically determine which words are most relevant during prediction.

Mathematically, attention uses matrices called queries, keys, and values to calculate weighted relationships between input elements, forming the core of modern transformer models.

This architecture powers many advanced AI systems, including large language models.
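A minimal NumPy sketch of scaled dot-product attention (illustrative only; the sequence length and dimensions are made up):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise query-key similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted combination of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```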


Scaling Laws and Large AI Models

Another major topic explored in the book is scaling, which refers to increasing the size of models, datasets, and computational resources to improve AI performance.

Modern deep learning systems often contain billions of parameters and are trained on massive datasets. Mathematical analysis helps researchers understand how model performance improves as systems scale.

Scaling involves several factors:

  • Increasing neural network depth and width

  • Expanding training datasets

  • Using more powerful computing resources

Understanding these scaling principles helps engineers design AI systems that are both efficient and capable of handling complex tasks.


Mathematical Optimization in Deep Learning

Optimization plays a crucial role in training deep learning models. During training, algorithms adjust model parameters to minimize prediction errors.

Common optimization techniques include:

  • Gradient descent

  • Stochastic gradient descent (SGD)

  • Adaptive optimization algorithms

These mathematical methods guide the learning process and allow neural networks to gradually improve performance.

Without optimization algorithms, neural networks would not be able to adapt to training data or learn useful representations.
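A minimal sketch of plain gradient descent (not the book's code), minimizing the quadratic f(w) = (w - 3)^2, whose minimum is at w = 3:

```python
def grad(w):
    # Derivative of f(w) = (w - 3)^2
    return 2 * (w - 3)

w = 0.0   # starting point
lr = 0.1  # learning rate
for _ in range(100):
    w -= lr * grad(w)  # step against the gradient

print(round(w, 4))  # 3.0
```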


Applications of Mathematical AI Systems

The mathematical principles described in the book are applied in many real-world AI technologies.

Examples include:

  • Natural language processing systems used in chatbots and translation tools

  • Computer vision models for image and video analysis

  • Recommendation systems used by online platforms

  • Scientific computing and research simulations

These applications demonstrate how mathematical AI models can analyze complex data and support decision-making across industries.


Who Should Read This Book

This book is particularly valuable for readers who want to understand the technical foundations of modern AI systems.

It is suitable for:

  • Students studying artificial intelligence or data science

  • Researchers exploring deep learning theory

  • Engineers developing advanced AI models

  • Mathematicians interested in the computational aspects of machine learning

Readers with some background in mathematics or programming will gain the most benefit from its detailed explanations.


Hard Copy: AI Mathematics — Deep Intelligence Systems Neural Networks, Attention, and Scaling: Understanding the Mathematical Architecture of Modern Artificial ... Intelligence from First Principles (Book 4)

Kindle: AI Mathematics — Deep Intelligence Systems Neural Networks, Attention, and Scaling: Understanding the Mathematical Architecture of Modern Artificial ... Intelligence from First Principles (Book 4)

Conclusion

AI Mathematics — Deep Intelligence Systems: Neural Networks, Attention, and Scaling offers an in-depth exploration of the mathematical architecture behind modern artificial intelligence. By explaining neural networks, attention mechanisms, and scaling principles from first principles, the book reveals how advanced AI systems are constructed and optimized.

As artificial intelligence continues to evolve, understanding its mathematical foundations becomes increasingly important. For anyone interested in the theory behind deep learning and intelligent systems, this book provides valuable insights into the science that powers the future of AI.

Machine Learning Intuition: Uncovering the simple ideas behind the science of prediction

Introduction

Machine learning has become one of the most important technologies in the modern digital world. From recommendation systems and fraud detection to medical diagnosis and language translation, machine learning models are used to make predictions from data. However, many learners find machine learning difficult to understand because it is often taught through complex mathematics and technical formulas.

The book Machine Learning Intuition: Uncovering the Simple Ideas Behind the Science of Prediction focuses on explaining machine learning in a more accessible way. Instead of relying heavily on advanced mathematics, the book emphasizes clear explanations, visual intuition, and simple examples to help readers understand how machine learning systems actually work.

Its goal is to help readers develop a deep conceptual understanding of predictive models and the logic behind modern machine learning techniques.


Understanding the Core Idea of Machine Learning

At its heart, machine learning is about learning patterns from data in order to make predictions about new data. Algorithms analyze past examples and use the discovered relationships to estimate future outcomes.

The book explains this fundamental idea in simple terms: how algorithms learn from examples, why they make certain predictions, and how different components of the machine learning workflow fit together.

Rather than focusing only on formulas, it helps readers build intuition about what is happening inside the models.


Learning the Language of AI and Machine Learning

For beginners, the first challenge in understanding machine learning is often the terminology. Words such as AI, machine learning, models, features, and datasets can feel overwhelming at first.

The book begins by explaining these basic concepts clearly. It introduces the fundamental vocabulary used in AI and helps readers understand how these ideas relate to each other.

By building this foundation, readers gain confidence in navigating more advanced topics.


Understanding Machine Learning Models Intuitively

One of the key strengths of the book is its focus on intuitive explanations of common machine learning algorithms. Instead of diving directly into equations, it explains how models work conceptually.

Examples of algorithms explained include:

  • k-Nearest Neighbors (KNN) – predicting outcomes based on similarity to past examples

  • Decision Trees – models that split decisions into a sequence of logical rules

  • Regression models – predicting continuous values based on relationships in data

Understanding these models conceptually helps readers grasp why machine learning systems behave the way they do.
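As an illustration of the similarity idea behind KNN (this is not the book's code), a toy one-dimensional 1-nearest-neighbour classifier can be sketched as:

```python
def predict_1nn(train, labels, x):
    # Predict the label of the single closest training point
    distances = [abs(t - x) for t in train]
    return labels[distances.index(min(distances))]

train = [1.0, 2.0, 8.0, 9.0]
labels = ["low", "low", "high", "high"]
print(predict_1nn(train, labels, 2.5))  # low
print(predict_1nn(train, labels, 7.0))  # high
```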


The Machine Learning Workflow

Building a machine learning system involves several steps beyond simply training a model. The book explains the entire machine learning workflow, which includes:

  1. Collecting and preparing data

  2. Preprocessing and feature engineering

  3. Training machine learning models

  4. Evaluating predictions

  5. Improving model performance

By understanding this process, readers see how different parts of a machine learning project fit together and contribute to the final predictive system.


Evaluating Model Performance

Another important topic covered in the book is how to evaluate whether a machine learning model is performing well. Machine learning models must be tested carefully to ensure that they can generalize to new data.

The book explains evaluation techniques for both classification and regression tasks, helping readers understand how to measure accuracy, detect overfitting, and compare models.

This practical perspective is essential for developing reliable machine learning systems.


Why Intuition Matters in Machine Learning

Many machine learning resources emphasize mathematical derivations and complex formulas. While these are important for advanced research, they can sometimes hide the fundamental ideas behind machine learning.

By focusing on intuition, the book helps readers:

  • Understand why algorithms work

  • Build mental models of prediction systems

  • Learn machine learning concepts more quickly

  • Apply techniques to real-world problems

Developing intuition allows learners to think critically about models rather than simply applying algorithms blindly.


Who Should Read This Book

Machine Learning Intuition is particularly useful for:

  • Beginners who want to understand machine learning concepts

  • Students studying data science or artificial intelligence

  • Professionals transitioning into machine learning careers

  • Developers who want a conceptual overview before studying advanced mathematics

Because the book emphasizes clarity and intuition, it is suitable for readers with limited background in mathematics or statistics.


Hard Copy: Machine Learning Intuition: Uncovering the simple ideas behind the science of prediction

Kindle: Machine Learning Intuition: Uncovering the simple ideas behind the science of prediction

Conclusion

Machine Learning Intuition: Uncovering the Simple Ideas Behind the Science of Prediction offers a refreshing approach to learning machine learning. By focusing on conceptual understanding instead of heavy mathematics, it helps readers grasp the fundamental ideas that power modern predictive systems.

As machine learning continues to influence industries and everyday technologies, building strong intuition about how these models work becomes increasingly valuable. This book serves as an excellent guide for anyone who wants to understand the science of prediction and develop a deeper appreciation for the principles behind machine learning.

The AI Engineering Bible: The Complete and Up-to-Date Guide to Build, Deploy and Scale Production Ready AI Systems

Introduction

Artificial intelligence is rapidly transforming industries, but building a successful AI system involves much more than training machine learning models. Real-world AI solutions require robust infrastructure, data pipelines, scalable architectures, and continuous monitoring. Many AI projects fail not because of poor algorithms but because they lack proper engineering practices and system design.

The book The AI Engineering Bible: The Complete and Up-to-Date Guide to Build, Deploy and Scale Production-Ready AI Systems provides a comprehensive guide to developing AI applications that work reliably in real environments. Written by Thomas R. Caldwell, the book focuses on the full lifecycle of AI engineering—from problem definition to deployment and long-term maintenance.

Unlike many AI books that concentrate only on theory, this guide emphasizes practical engineering strategies for building scalable, production-ready AI systems.


The Rise of AI Engineering

AI engineering is a discipline that combines machine learning, software engineering, and data infrastructure to create intelligent applications that operate reliably in production environments.

While machine learning research focuses on building models, AI engineering focuses on turning those models into real-world systems that can scale, perform efficiently, and integrate with existing software platforms.

This shift reflects the growing demand for professionals who can manage the entire AI pipeline, including data preparation, model training, deployment, monitoring, and maintenance.


Designing AI Systems from the Ground Up

One of the central themes of the book is structured system design. Before developing any AI model, engineers must clearly define the problem and understand the environment in which the system will operate.

Key design considerations include:

  • Identifying the business problem AI will solve

  • Defining system requirements and success metrics

  • Designing data collection and storage strategies

  • Addressing ethical and compliance concerns

Proper planning ensures that AI systems align with business objectives and operate responsibly.


Building Reliable Data Pipelines

Data is the foundation of every AI system. The book explains how to design data pipelines that collect, preprocess, and manage datasets efficiently.

Important elements of data pipelines include:

  • Data ingestion and storage systems

  • Data preprocessing and cleaning workflows

  • Feature engineering and dataset versioning

  • Integration with machine learning training pipelines

Reliable data pipelines ensure that models receive consistent and high-quality data, which improves prediction accuracy and system reliability.


Training and Managing Machine Learning Models

Once the data pipeline is established, engineers can focus on developing machine learning models. The book explains how to design training workflows and evaluate models effectively.

Topics related to model development include:

  • Model selection and algorithm design

  • Training loops and evaluation metrics

  • Hyperparameter optimization

  • Experiment tracking and version control

These practices help engineers maintain reproducibility and continuously improve model performance.


Deploying AI Systems in Production

One of the biggest challenges in AI development is moving models from experimentation to production environments. The book provides practical guidance for deploying AI models into real applications.

Deployment strategies discussed include:

  • Containerization using technologies such as Docker

  • API-based model serving

  • Cloud-based AI infrastructure

  • Continuous integration and deployment pipelines

These methods allow AI systems to deliver predictions at scale while maintaining reliability and performance.


Scaling AI Systems

As AI applications grow, they must handle larger datasets, more users, and increasing computational demands. The book explores strategies for scaling AI systems efficiently.

Key scaling techniques include:

  • Distributed model inference

  • Load balancing and traffic management

  • Efficient memory and computational resource management

  • Cloud infrastructure scaling

Scaling ensures that AI systems remain responsive even as usage increases.


Monitoring and Maintaining AI Models

Deploying a model is not the end of the AI lifecycle. Real-world environments constantly change, which means models must be monitored and updated regularly.

Important maintenance practices include:

  • Detecting model drift when data distributions change

  • Retraining models with new datasets

  • Monitoring system performance and reliability

  • Implementing feedback loops for continuous improvement

These practices help ensure that AI systems remain accurate and effective over time.


Who Should Read This Book

The AI Engineering Bible is particularly valuable for professionals involved in building and managing AI systems.

It is suitable for:

  • AI engineers and machine learning engineers

  • Software developers transitioning into AI roles

  • Data scientists interested in production AI systems

  • Technical leaders managing AI initiatives

The book provides both strategic guidance and technical insights for building scalable AI infrastructure.


Hard Copy: The AI Engineering Bible: The Complete and Up-to-Date Guide to Build, Deploy and Scale Production Ready AI Systems

Kindle: The AI Engineering Bible: The Complete and Up-to-Date Guide to Build, Deploy and Scale Production Ready AI Systems

Conclusion

The AI Engineering Bible highlights an essential truth about modern artificial intelligence: building successful AI systems requires strong engineering foundations. By covering every stage of the AI lifecycle—from system design and data pipelines to deployment and scaling—the book provides a practical roadmap for developing production-ready AI applications.

As AI technologies continue to evolve, the ability to engineer robust, scalable systems will become increasingly important. For developers and organizations aiming to turn machine learning models into real-world solutions, this book offers a valuable guide to mastering the discipline of AI engineering.

Experimental Design for Data Science and Engineering (Chapman & Hall/CRC Texts in Statistical Science)

Introduction

Modern data science and engineering rely heavily on experiments to understand systems, evaluate models, and improve decision-making. Whether optimizing manufacturing processes, testing machine learning models, or conducting scientific research, well-planned experiments are essential for extracting reliable insights from data.

The book Experimental Design for Data Science and Engineering by V. Roshan Joseph provides a comprehensive introduction to the statistical methods used to design efficient experiments. It explains how carefully structured experiments can reduce costs, improve accuracy, and accelerate discovery in data-driven environments.

The book connects classical statistical theory with modern data science challenges, making it valuable for researchers, engineers, and data scientists.


The Role of Experimental Design in Data Science

Experimental design is a statistical framework used to plan experiments so that meaningful conclusions can be drawn from collected data. Instead of testing variables randomly or inefficiently, researchers use structured methods to control factors and measure outcomes systematically.

In scientific and engineering contexts, theory, experiments, computation, and data are considered the four pillars of discovery. Experimental design helps link these elements by determining how experiments should be conducted to reveal the most information about a system.

A well-designed experiment allows researchers to:

  • Identify cause-and-effect relationships

  • Evaluate the impact of multiple variables simultaneously

  • Reduce the number of required experimental trials

  • Improve the reliability of statistical conclusions


Foundations of the Design of Experiments

The design of experiments (DOE) is a statistical discipline that studies how to structure experiments so that variables can be tested efficiently and objectively. In controlled experiments, researchers manipulate independent variables and observe their effects on outcomes.

Classic experimental design methods include:

Randomization

Randomization helps eliminate bias by randomly assigning treatments or conditions to experimental units.

Replication

Replication involves repeating experiments to ensure that results are reliable and not due to random chance.

Blocking

Blocking groups similar experimental units together to reduce variability caused by external factors.

These principles ensure that conclusions drawn from experiments are statistically valid.


Factorial Experiments and Multiple Variables

In many real-world problems, outcomes depend on multiple variables interacting with each other. Factorial experiments are designed to study these interactions efficiently.

A factorial design tests every possible combination of different factor levels, allowing researchers to measure both individual effects and interactions between variables.

For example, in a manufacturing experiment, factors such as temperature, pressure, and material composition might all influence product quality. A factorial experiment helps determine how these factors interact and which combination produces the best results.
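A full factorial design simply enumerates every combination of factor levels, which can be sketched with itertools.product; the factor names and levels below are hypothetical:

```python
from itertools import product

# Hypothetical factors for a manufacturing experiment
temperature = [150, 200]   # degrees C
pressure = [1.0, 2.0]      # bar
material = ["A", "B"]

# Full factorial design: every combination of levels
runs = list(product(temperature, pressure, material))
print(len(runs))  # 2 * 2 * 2 = 8 experimental runs
print(runs[0])    # (150, 1.0, 'A')
```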


Optimal Experimental Design

Modern data science often deals with large datasets and complex systems. In these situations, running too many experiments can be expensive or impractical. This is where optimal experimental design becomes important.

Optimal design methods aim to obtain the most informative results with the smallest number of experiments. These designs minimize statistical uncertainty and reduce the cost of experimentation while maintaining accuracy.

In engineering and machine learning, optimal design techniques are commonly used for:

  • Parameter estimation in statistical models

  • Process optimization in industrial systems

  • Hyperparameter tuning in machine learning models


Applications in Data Science and Engineering

Experimental design techniques are widely used across many domains.

Machine Learning and AI

Experiments help evaluate model performance, tune hyperparameters, and compare algorithms.

Manufacturing and Engineering

Engineers use experimental design to optimize production processes and improve product quality.

Scientific Research

Researchers use controlled experiments to test hypotheses and discover new scientific insights.

Business and Marketing

Companies use experiments such as A/B testing to evaluate marketing strategies and customer behavior.

These applications demonstrate how experimental design supports evidence-based decision-making.


Integrating Experimental Design with Modern Data Science

As data science continues to evolve, experimental design methods are increasingly combined with computational tools and machine learning techniques. Modern approaches use algorithms to plan experiments dynamically, analyze large datasets, and suggest the most informative experiments to run next.

This integration allows data scientists to move beyond simple trial-and-error approaches and instead rely on statistically guided experimentation.


Who Should Read This Book

Experimental Design for Data Science and Engineering is particularly useful for:

  • Data scientists working with complex datasets

  • Engineers optimizing systems or processes

  • Researchers conducting scientific experiments

  • Graduate students studying statistics or machine learning

The book provides both theoretical foundations and practical insights, making it a valuable resource for professionals who want to apply experimental design methods in real-world scenarios.


Hard Copy: Experimental Design for Data Science and Engineering (Chapman & Hall/CRC Texts in Statistical Science)

Kindle: Experimental Design for Data Science and Engineering (Chapman & Hall/CRC Texts in Statistical Science)

Conclusion

Experimental Design for Data Science and Engineering highlights the importance of structured experimentation in modern data-driven fields. By combining statistical theory with practical applications, the book demonstrates how well-designed experiments can uncover meaningful insights while minimizing cost and effort.

As organizations increasingly rely on data to guide decisions, understanding experimental design becomes essential for ensuring that conclusions are accurate, reproducible, and scientifically sound. For data scientists and engineers alike, mastering experimental design is a key step toward building reliable and impactful data-driven solutions.

Thursday, 19 March 2026

Python Coding challenge - Day 1094| What is the output of the following Python Code?

Code Explanation:

1️⃣ Importing the Module
import threading

Explanation

Imports Python’s threading module.

Used to create and manage threads.

2️⃣ Defining the Function
def task(i):

Explanation

A function named task is defined.

It takes one argument i.

Each thread will print a value.

3️⃣ Printing the Value
print(i)

Explanation

Prints the value passed to the function.

Output depends on the value of i.

4️⃣ Loop Execution
for i in range(3):

Explanation

Loop runs 3 times.

Values of i:

0, 1, 2

5️⃣ Creating a Thread
t = threading.Thread(target=task, args=(i,))

Explanation

A new thread is created in each iteration.

target=task → thread will execute task(i).

args=(i,) → passes current value of i as argument.

⚠️ (i,) — the trailing comma makes this a one-element tuple, which is the correct syntax for passing a single argument.


6️⃣ Starting the Thread
t.start()

Explanation

Starts the thread.

The thread executes task(i) and prints the value.

7️⃣ Joining the Thread (⚠️ Important)
t.join()

Explanation

Main thread waits for the current thread to finish before continuing.

Because join() is called inside the loop, each thread finishes before the next one starts → the threads run sequentially, not in parallel.

🔄 Execution Flow

Iteration 1:

Thread runs → prints 0

Main thread waits

Iteration 2:

Thread runs → prints 1

Main thread waits

Iteration 3:

Thread runs → prints 2

Main thread waits

📤 Final Output
0
1
2
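Assembled from the fragments above, the complete program is:

```python
import threading

def task(i):
    print(i)

for i in range(3):
    t = threading.Thread(target=task, args=(i,))
    t.start()
    t.join()  # wait here, so each thread finishes before the next starts
```

Because join() sits inside the loop, the output is deterministic: 0, 1, 2 on separate lines.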

Python Coding challenge - Day 1093| What is the output of the following Python Code?

 


Code Explanation:

1️⃣ Importing the Module
import threading

Explanation

Imports the threading module.

This module allows Python to run multiple threads concurrently.

2️⃣ Defining the Function
def task():

Explanation

A function named task is defined.

This function will be executed by each thread.

3️⃣ Function Body
print("X")

Explanation

When a thread runs this function, it prints:

X

4️⃣ Loop for Creating Threads
for _ in range(3):

Explanation

Loop runs 3 times.

_ is just a placeholder variable (value not used).

So, 3 threads will be created.

5️⃣ Creating and Starting Threads
threading.Thread(target=task).start()

Explanation

A new thread is created in each iteration.

target=task → each thread runs task().

.start() immediately starts the thread.

👉 So, 3 threads run concurrently, each printing "X".

6️⃣ Printing from Main Thread
print("Done")

Explanation

This line runs in the main thread.

It prints:

Done


There is no join(), so:

Main thread does not wait for child threads.

Execution order becomes unpredictable.

One Possible Output
X
X
X
Done

Since the main thread does not wait for the child threads, "Done" may also appear before or between the X lines.
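Assembled from the fragments above, the complete program is:

```python
import threading

def task():
    print("X")

for _ in range(3):
    threading.Thread(target=task).start()

print("Done")  # no join(), so "Done" may interleave with the X lines
```

Three X lines and one Done line are always printed, but their relative order is not guaranteed.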

Python Coding challenge - Day 1086| What is the output of the following Python Code?

Code Explanation:

1️⃣ Defining Class Multiplier
class Multiplier:

A class named Multiplier is created.

Objects created from this class will have its methods and variables.

🔹 2️⃣ Defining the Constructor __init__
def __init__(self):
    self.value = 1

__init__ runs when an object is created.

self.value = 1 creates an instance variable named value.

So when an object is created:

value = 1

🔹 3️⃣ Defining the __call__ Method
def __call__(self):

__call__ is a special method.

It allows the object to be called like a function.

Example:

m() → calls m.__call__()

🔹 4️⃣ Updating the Instance Variable
self.value *= 3

This means:

self.value = self.value * 3

So the value triples each time the method is called.

🔹 5️⃣ Returning the Updated Value
return self.value

After multiplying the value, the updated value is returned.

🔹 6️⃣ Creating an Object
m = Multiplier()

An instance m of class Multiplier is created.

Constructor runs:

self.value = 1

🔹 7️⃣ Calling the Object
print(m(), m(), m())

Because of __call__, this is equivalent to:

print(m.__call__(), m.__call__(), m.__call__())

Python executes them from left to right.

๐Ÿ” Step-by-Step Execution
Step 1️⃣
m()
value = 1 * 3
value = 3

Return:

3
Step 2️⃣
m()
value = 3 * 3
value = 9

Return:

9
Step 3️⃣
m()
value = 9 * 3
value = 27

Return:

27

✅ Final Output
3 9 27
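Assembled from the fragments above, the complete program is:

```python
class Multiplier:
    def __init__(self):
        self.value = 1  # starting value

    def __call__(self):
        # Makes the instance callable like a function
        self.value *= 3
        return self.value

m = Multiplier()
print(m(), m(), m())  # 3 9 27
```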

Python Coding challenge - Day 1085| What is the output of the following Python Code?

Code Explanation:

1️⃣ Defining Class Tracker
class Tracker:

A class named Tracker is created.

Objects of this class will inherit its variables and methods.

🔹 2️⃣ Defining a Class Variable
total = 0

total is a class variable.

It belongs to the class itself, not to individual objects.

Internally:

Tracker.total = 0

All instances will use the same variable.

🔹 3️⃣ Defining Method add
def add(self):

add is an instance method.

self refers to the object calling the method (t1 or t2).

🔹 4️⃣ Updating the Class Variable
Tracker.total += 2

This increases the class variable total by 2.

Important:

We are modifying the class variable directly:

Tracker.total

So every object sees the updated value.

🔹 5️⃣ Returning the Updated Value
return Tracker.total

After incrementing, the method returns the new value of total.

🔹 6️⃣ Creating First Object
t1 = Tracker()

Creates an instance named t1.

At this moment:

Tracker.total = 0

🔹 7️⃣ Creating Second Object
t2 = Tracker()

Creates another instance t2.

Both objects share the same variable:

Tracker.total

🔹 8️⃣ Calling the Methods
print(t1.add(), t2.add(), t1.add())

Python executes each call left to right.

Step 1️⃣
t1.add()

Execution:

Tracker.total = 0 + 2

Now:

Tracker.total = 2

Return value:

2
Step 2️⃣
t2.add()

Execution:

Tracker.total = 2 + 2

Now:

Tracker.total = 4

Return value:

4
Step 3️⃣
t1.add()

Execution:

Tracker.total = 4 + 2

Now:

Tracker.total = 6

Return value:

6

✅ Final Output
2 4 6
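The code image is missing above; reconstructed from the step-by-step explanation, the snippet would be:

```python
class Tracker:
    total = 0  # class variable, shared by every instance

    def add(self):
        # modify the class variable itself, so all objects see the change
        Tracker.total += 2
        return Tracker.total

t1 = Tracker()
t2 = Tracker()
print(t1.add(), t2.add(), t1.add())  # 2 4 6
```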

Python Coding challenge - Day 1082| What is the output of the following Python Code?

 


Code Explanation:

🔹 1️⃣ Defining Descriptor Class D
class D:

A new class D is created.

This class will act as a descriptor because it defines the method __get__.

🔹 2️⃣ Defining the __get__ Method
def __get__(self, obj, objtype):
    return 50

This method is automatically called when the attribute is accessed.

Parameters:

self → the descriptor object

obj → the instance accessing the attribute (a)

objtype → the class of the instance (A)

In this code:

return 50

So whenever the attribute is accessed, 50 will be returned.

🔹 3️⃣ Defining Class A
class A:

A new class A is created.

🔹 4️⃣ Assigning Descriptor to Class Attribute
x = D()

Here an instance of class D is assigned to the class variable x.

So internally:

A.x → descriptor object

This means x is now controlled by the descriptor D.

🔹 5️⃣ Creating an Object of Class A
a = A()

An instance a of class A is created.

At this moment:

a.__dict__ = {}

No instance attributes exist yet.

🔹 6️⃣ Accessing a.x
print(a.x)

Python performs attribute lookup.

Steps:

Step 1

Check instance dictionary

a.__dict__

No x found.

Step 2

Check class attributes

A.x

Found → descriptor object D.

Step 3

Since it is a descriptor, Python calls:

D.__get__(descriptor, a, A)

Inside the method:

return 50

✅ Final Output
50
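Putting the pieces from the explanation together, the full snippet would look like this:

```python
class D:
    def __get__(self, obj, objtype):
        # invoked automatically when the attribute is looked up
        return 50

class A:
    x = D()  # descriptor instance stored as a class attribute

a = A()
print(a.x)  # 50
```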


Python Coding challenge - Day 1083| What is the output of the following Python Code?

 


Code Explanation:

🔹 1️⃣ Defining Descriptor Class D
class D:

A class named D is created.

This class will act as a descriptor.

🔹 2️⃣ Defining the __get__ Method
def __get__(self, obj, objtype):
    return 100

This method runs when the attribute is accessed.

Parameters:

self → descriptor object

obj → instance accessing the attribute (a)

objtype → class of the instance (A)

Behavior:

return 100

So whenever the attribute is accessed, the value 100 is returned.

🔹 3️⃣ Defining the __set__ Method
def __set__(self, obj, value):
    obj.__dict__['x'] = value

This method runs when the attribute is assigned.

Example:

a.x = 5

Execution:

obj.__dict__['x'] = value

So internally Python stores:

a.__dict__['x'] = 5

🔹 4️⃣ Defining Class A
class A:

A class named A is created.

🔹 5️⃣ Assigning Descriptor to Class Attribute
x = D()

Here:

An instance of D is assigned to x.

Internally:

A.x → descriptor object

So x is now controlled by the descriptor D.

🔹 6️⃣ Creating an Object
a = A()

An instance a of class A is created.

Initially:

a.__dict__ = {}

🔹 7️⃣ Assigning Value to a.x
a.x = 5

Python sees that x is a descriptor with __set__.

So it calls:

D.__set__(descriptor, a, 5)

Inside the method:

a.__dict__['x'] = 5

Now:

a.__dict__ = {'x': 5}

🔹 8️⃣ Accessing a.x
print(a.x)

Now Python performs attribute lookup.

Lookup order:

1️⃣ Data descriptor

Since x is a data descriptor, Python calls:

D.__get__(descriptor, a, A)

Inside __get__:

return 100

🔹 9️⃣ Instance Dictionary Ignored

Even though:

a.__dict__['x'] = 5

Python ignores it because:

⚠ Data descriptors take priority over instance attributes.

✅ Final Output
100
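Reconstructed from the explanation, the complete snippet demonstrates the data-descriptor priority rule:

```python
class D:
    def __get__(self, obj, objtype):
        # data descriptors win over the instance dictionary on lookup
        return 100

    def __set__(self, obj, value):
        # assignment is routed here; the value lands in the instance dict
        obj.__dict__['x'] = value

class A:
    x = D()

a = A()
a.x = 5            # calls D.__set__
print(a.x)         # 100, even though a.__dict__ is {'x': 5}
```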

🔻 Day 30: Funnel Chart in Python

 

🔹 What is a Funnel Chart?

A Funnel Chart visualizes a process where data moves through stages, typically showing a decrease at each step.

It’s called a funnel because the shape narrows as values drop.


🔹 When Should You Use It?

Use a funnel chart when:

  • Showing conversion stages

  • Tracking sales pipeline

  • Visualizing process drop-offs

  • Analyzing user journey steps


🔹 Example Scenario

Website Conversion Funnel:

  1. Website Visitors

  2. Product Views

  3. Add to Cart

  4. Purchases

Each stage usually has fewer users than the previous one.


🔹 Key Idea Behind It

👉 Top stage = largest value
👉 Each next stage = reduced value
👉 Highlights where drop-offs happen
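The drop-offs can also be checked numerically before plotting. A minimal sketch, using the same stage counts as the Plotly example in this post, prints each stage's conversion relative to the previous one:

```python
# Funnel stages and counts from the example in this post
stages = ["Visitors", "Product Views", "Add to Cart", "Purchases"]
values = [1000, 700, 400, 200]

# Each stage as a percentage of the stage before it
for name, prev, curr in zip(stages[1:], values, values[1:]):
    print(f"{name}: {curr / prev:.0%} of previous stage")
```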


🔹 Python Code (Funnel Chart using Plotly)

import plotly.graph_objects as go

stages = ["Visitors", "Product Views", "Add to Cart", "Purchases"]
values = [1000, 700, 400, 200]

fig = go.Figure(go.Funnel(
    y=stages,
    x=values
))
fig.update_layout(title="Website Conversion Funnel")
fig.show()


📌 Install Plotly if needed:

pip install plotly

🔹 Output Explanation

  • Top section = maximum users

  • Funnel narrows at each stage

  • Visually shows conversion drop

  • Interactive hover details


🔹 Funnel Chart vs Bar Chart

Aspect             | Funnel Chart | Bar Chart
Process stages     | Excellent    | Good
Drop-off clarity   | Very High    | Medium
Storytelling       | Strong       | Neutral
Business analytics | Ideal        | Useful

🔹 Key Takeaways

  • Perfect for sales & marketing analysis

  • Quickly identifies bottlenecks

  • Best for sequential processes

  • Very popular in business dashboards

🌞 Day 29: Sunburst Chart in Python

 

🔹 What is a Sunburst Chart?

A Sunburst Chart is a circular hierarchical visualization where:

  • Inner rings represent parent categories

  • Outer rings represent child categories

  • Each segment’s size shows its proportion

Think of it as a radial treemap.


🔹 When Should You Use It?

Use a sunburst chart when:

  • Your data is hierarchical

  • You want to show part-to-whole at multiple levels

  • Structure is more important than exact values

Avoid it for precise numeric comparison.


🔹 Example Scenario

  • Company → Department → Team performance

  • Website → Section → Page views

  • Product → Category → Sub-category sales


🔹 Key Idea Behind It

👉 Center = top-level category
👉 Rings expand outward for deeper levels
👉 Angle/area represents contribution


🔹 Python Code (Sunburst Chart)

import plotly.express as px
import pandas as pd

data = pd.DataFrame({
    "category": ["Electronics", "Electronics", "Clothing", "Clothing"],
    "subcategory": ["Mobiles", "Laptops", "Men", "Women"],
    "value": [40, 30, 20, 10]
})

fig = px.sunburst(
    data,
    path=['category', 'subcategory'],
    values='value',
    title='Sales Distribution by Category'
)
fig.show()

📌 Install Plotly if needed:

pip install plotly

🔹 Output Explanation

  • Inner circle shows main categories

  • Outer ring breaks them into subcategories

  • Larger segments indicate higher contribution

  • Interactive (hover & zoom)


🔹 Sunburst vs Treemap

Aspect            | Sunburst | Treemap
Shape             | Circular | Rectangular
Hierarchy clarity | High     | Medium
Space efficiency  | Medium   | High
Visual appeal     | High     | Medium

🔹 Key Takeaways

  • Best for hierarchical storytelling

  • Interactive charts work best

  • Avoid too many levels

  • Great for dashboards & reports


🚀 Day 3/150 – Subtract Two Numbers in Python


Let’s explore several methods.


1️⃣ Basic Subtraction (Direct Method)

The simplest way to subtract two numbers is by using the - operator.

a = 10
b = 5
result = a - b
print(result)



2️⃣ Taking User Input

In real programs, numbers often come from user input rather than being predefined.

a = int(input("Enter first number: "))
b = int(input("Enter second number: "))
print("Difference:", a - b)

Here we use input() to take values from the user and int() to convert them into integers.



3️⃣ Using a Function

Functions help make code reusable and organized.

def subtract(x, y):
    return x - y

print(subtract(10, 5))




The function subtract() takes two parameters and returns their difference.


4️⃣ Using a Lambda Function (One-Line Function)

A lambda function is a small anonymous function written in a single line.

subtract = lambda x, y: x - y
print(subtract(10, 5))


Lambda functions are useful when you need a short, temporary function.

5️⃣ Using the operator Module

Python also provides built-in modules that perform mathematical operations. 

import operator

print(operator.sub(10, 5))


The operator.sub() function performs the same subtraction operation.

6️⃣ Using List and reduce()

Another approach is to store numbers in a list and apply a reduction operation.

from functools import reduce

numbers = [10, 5]
result = reduce(lambda x, y: x - y, numbers)
print(result)

reduce() applies the function cumulatively to the items in the list.
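With only two numbers this is the same as a - b, but the cumulative, left-to-right behavior shows up with a longer list (hypothetical numbers for illustration):

```python
from functools import reduce

numbers = [100, 20, 5]
# Evaluated left to right: (100 - 20) - 5
result = reduce(lambda x, y: x - y, numbers)
print(result)  # 75
```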


🎯 Conclusion

There are many ways to subtract numbers in Python. The most common method is using the - operator, but functions, lambda expressions, and built-in modules provide more flexibility in larger programs.

In this series, we explore multiple approaches so you can understand Python more deeply and write better code.

📌 Next in the series: Multiply Two Numbers in Python