Monday, 9 March 2026
Python Coding challenge - Day 1069| What is the output of the following Python Code?
Python Developer March 09, 2026 Python Coding Challenge No comments
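The snippet under discussion is not shown as a single block; assembling the fragments from the line-by-line walkthrough below, it is presumably:

```python
class A:
    def __getattribute__(self, name):
        # Called for EVERY attribute access on the object
        if name == "x":
            return 10
        # Fall back to normal attribute lookup
        return super().__getattribute__(name)

    def __getattr__(self, name):
        # Called only when normal lookup fails
        return 20

a = A()
print(a.x, a.y)  # 10 20
```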
Code Explanation:
1. Defining Class A
class A:
Explanation:
This line defines a class named A.
A class is a template used to create objects.
2. Defining __getattribute__ Method
def __getattribute__(self, name):
Explanation:
__getattribute__ is a special (magic) method in Python.
It is automatically called every time any attribute of an object is accessed.
self → the current object
name → the attribute name being accessed.
Example:
When we write a.x, Python internally calls:
a.__getattribute__("x")
3. Checking if Attribute Name is "x"
if name == "x":
return 10
Explanation:
If the attribute being accessed is x, the method returns 10.
This means any access to a.x will return 10, even if x is not defined.
So:
a.x → 10
4. Accessing Default Attribute Behavior
return super().__getattribute__(name)
Explanation:
If the attribute is not "x", Python calls the parent class implementation of __getattribute__.
super() refers to the base object behavior.
This line tells Python to look for the attribute normally.
If the attribute exists → it is returned.
If it does not exist → super().__getattribute__ raises AttributeError, which makes Python fall back to __getattr__.
5. Defining __getattr__
def __getattr__(self, name):
return 20
Explanation:
__getattr__ is another special method.
It is called only when the attribute is not found normally.
It returns 20 for any missing attribute.
So if an attribute does not exist, Python returns:
20
6. Creating an Object
a = A()
Explanation:
This creates an object a of class A.
7. Printing Attributes
print(a.x, a.y)
Python evaluates this in two parts.
7.1 Accessing a.x
Python calls:
a.__getattribute__("x")
Inside __getattribute__:
name == "x" → True
Returns 10
So:
a.x → 10
7.2 Accessing a.y
Python calls:
a.__getattribute__("y")
Inside __getattribute__:
name == "x" → False
Calls:
super().__getattribute__("y")
But y does not exist in the object.
So Python calls:
__getattr__("y")
This returns:
20
So:
a.y → 20
8. Final Output
10 20
Day 48: Beeswarm Plot in Python
A Beeswarm Plot (also called a Swarm Plot) is a powerful visualization used to display the distribution of data points across different categories. Unlike a simple scatter plot, a beeswarm plot adjusts the position of points so they don’t overlap, making it easier to see how data is spread within each category.
In this example, we use the Iris dataset to visualize how petal length varies across different flower species.
Why Use a Beeswarm Plot?
Beeswarm plots are useful when you want to:
- Show individual data points
- Understand the distribution of values
- Compare multiple categories
- Avoid overlapping points like in regular scatter plots
They are commonly used in data analysis, exploratory data science, and statistical visualization.
Dataset Used
We are using the Iris dataset, one of the most popular datasets in machine learning and statistics.
The dataset contains measurements of iris flowers including:
- Sepal Length
- Sepal Width
- Petal Length
- Petal Width
- Species
The three species are:
- Setosa
- Versicolor
- Virginica
In this visualization, we compare petal length across these species.
Python Code
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
import pandas as pd
# Load dataset
iris = load_iris()
# Create dataframe
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["Species"] = iris.target_names[iris.target]
# Create Beeswarm Plot
plt.figure(figsize=(8,5))
sns.swarmplot(data=df, x="Species", y="petal length (cm)")
# Title
plt.title("Beeswarm Plot: Petal Length by Species")
plt.tight_layout()
plt.show()
Code Explanation
1️⃣ Import Libraries
We import the required libraries:
- Seaborn → for statistical visualizations
- Matplotlib → for plotting
- Scikit-learn → to load the Iris dataset
- Pandas → for data manipulation
2️⃣ Load the Dataset
iris = load_iris()
This loads the iris dataset from scikit-learn.
3️⃣ Create a DataFrame
df = pd.DataFrame(iris.data, columns=iris.feature_names)
We convert the dataset into a pandas DataFrame for easier handling.
Then we add the species column:
df["Species"] = iris.target_names[iris.target]
4️⃣ Create the Beeswarm Plot
sns.swarmplot(data=df, x="Species", y="petal length (cm)")
This line creates the beeswarm plot where:
- x-axis → flower species
- y-axis → petal length
- Each dot represents one observation
The swarm algorithm spreads points horizontally to avoid overlap.
5️⃣ Add Title and Display
plt.title("Beeswarm Plot: Petal Length by Species")
plt.show()
This adds a chart title and displays the plot.
What Insights Can We See?
From the beeswarm plot:
- Setosa flowers have small petal lengths
- Versicolor has medium petal lengths
- Virginica generally has larger petals
The plot clearly shows distinct clusters for each species, which is why the Iris dataset is often used for classification problems in machine learning.
When Should You Use Beeswarm Plots?
Use beeswarm plots when you want to:
- Show raw data points
- Compare distributions across categories
- Avoid overlapping points
- Perform exploratory data analysis
They are especially useful in data science, biology, statistics, and machine learning.
Conclusion
The Beeswarm Plot is a simple yet powerful way to visualize categorical data distributions while preserving individual data points. Using Seaborn in Python, creating this plot becomes quick and effective for exploring patterns within your dataset.
In just a few lines of code, we were able to visualize petal length differences across iris species, revealing clear distinctions between the groups.
Python Coding Challenge - Question with Answer (ID -090326)
Python Coding March 09, 2026 Python Quiz No comments
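The quiz code itself does not appear above; reconstructed from the explanation that follows, it is presumably:

```python
for i in range(3):
    print(i)
else:
    # Runs because the loop completed without hitting a break
    print("Done")
```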
1️⃣ range(3)
range(3) generates numbers starting from 0 up to 2.
So the values will be:
0, 1, 2
2️⃣ for Loop Execution
The loop runs three times.
Iteration steps:
| Iteration | Value of i | Output |
|---|---|---|
| 1 | 0 | 0 |
| 2 | 1 | 1 |
| 3 | 2 | 2 |
So the loop prints:
0
1
2
3️⃣ else with for Loop
In Python, a for loop can have an else block.
The else block executes only if the loop finishes normally (no break statement).
Since there is no break in this loop, the else block runs.
4️⃣ Final Output
0
1
2
Done
⚡ Important Concept
If we add break, the else will not run.
Example:
for i in range(3):
    print(i)
    break
else:
    print("Done")
Output:
0
"Done" will not print because the loop was stopped by break.
AUTOMATING EXCEL WITH PYTHON
Python Coding challenge - Day 1068| What is the output of the following Python Code?
Python Developer March 09, 2026 Python Coding Challenge No comments
Python Coding challenge - Day 1067| What is the output of the following Python Code?
Python Developer March 09, 2026 Python Coding Challenge No comments
Saturday, 7 March 2026
Python Coding Challenge - Question with Answer (ID -080326)
Python for Cybersecurity
Day 47: Mosaic Plot in Python
On Day 47 of our Data Visualization series, we explored a powerful chart for analyzing relationships between categorical variables — the Mosaic Plot.
When you want to understand how two (or more) categorical variables interact with each other, Mosaic Plots provide a clear and intuitive visual representation.
Today, we applied it to the classic Iris dataset to examine the relationship between Species and Petal Size category.
What is a Mosaic Plot?
A Mosaic Plot is a graphical method for visualizing contingency tables (cross-tabulated categorical data).
It represents:
- Categories as rectangles
- Width proportional to one variable
- Height proportional to another variable
- Area representing frequency or proportion
The larger the rectangle, the higher the frequency of that category combination.
Dataset Used: Iris Dataset
The Iris dataset contains:
- Sepal Length
- Sepal Width
- Petal Length
- Petal Width
- Species (Setosa, Versicolor, Virginica)
For this visualization, we:
- Used Species as one categorical variable
- Converted Petal Length into 3 categories: Small, Medium, Large
This helps us visually compare petal size distribution across species.
Python Implementation
✅ Step 1: Import Libraries
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.mosaicplot import mosaic
from sklearn.datasets import load_iris
- Pandas → Data manipulation
- Matplotlib → Plot rendering
- Statsmodels → Mosaic plot function
- Scikit-learn → Dataset loading
✅ Step 2: Load and Prepare Data
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["Species"] = iris.target_names[iris.target]
We convert numeric species labels into readable category names.
✅ Step 3: Create Petal Size Categories
df["Petal Size"] = pd.cut(
    df["petal length (cm)"],
    3,
    labels=["Small", "Medium", "Large"]
)
Here we divide petal length into three equal bins.
This transforms a continuous variable into a categorical one.
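As a quick standalone illustration of how pd.cut forms three equal-width bins (the sample values here are invented for demonstration, not taken from the Iris data):

```python
import pandas as pd

# Five made-up measurements spanning 1.0 to 6.9
values = pd.Series([1.0, 2.5, 4.0, 5.5, 6.9])

# Split the value range into 3 equal-width intervals and label them
sizes = pd.cut(values, 3, labels=["Small", "Medium", "Large"])
print(list(sizes))  # → ['Small', 'Small', 'Medium', 'Large', 'Large']
```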
✅ Step 4: Create the Mosaic Plot
fig, ax = plt.subplots(figsize=(9, 5))
mosaic(df, ["Species", "Petal Size"], gap=0.02, ax=ax)
plt.title("Mosaic Plot: Species vs Petal Size")
plt.tight_layout()
plt.show()
Key Parameters:
["Species", "Petal Size"] → Defines categorical relationship
gap=0.02 → Adds spacing between tiles
figsize → Controls plot size
What the Visualization Reveals
From the Mosaic Plot:
Setosa
- Almost entirely in the Small petal category
- Very little variation
Versicolor
- Mostly in the Medium category
- Some overlap into Small and Large
Virginica
- Dominantly in the Large category
- Some presence in Medium
Key Insight
The Mosaic Plot clearly shows that:
- Petal size is strongly associated with species
- Species are well separated based on petal length
- This confirms why petal measurements are highly important features in classification models
Even without machine learning, we can visually detect separation patterns.
Why Use Mosaic Plots?
✔ Excellent for categorical comparisons
✔ Shows proportional relationships clearly
✔ Works well with contingency tables
✔ Helpful in statistical analysis
✔ Easy to interpret once understood
Real-World Applications
- Marketing: Customer segment vs product category
- Healthcare: Disease type vs severity level
- Education: Grade vs performance category
- Business: Region vs sales category
- Survey Analysis
Day 47 Takeaway
Mosaic Plots transform categorical relationships into visual area comparisons.
They help you:
- Understand category dominance
- Identify imbalances
- Discover associations
- Validate statistical assumptions
Python Coding challenge - Day 1066| What is the output of the following Python Code?
Python Developer March 07, 2026 Python Coding Challenge No comments
Python Coding challenge - Day 1065| What is the output of the following Python Code?
Python Developer March 07, 2026 Python Coding Challenge No comments
Friday, 6 March 2026
Day 45: Cluster Plot in Python (K-Means Explained Simply)
Today we’re visualizing how machines group data automatically using K-Means clustering.
No labels.
No supervision.
Just patterns.
Let’s break it down.
What is Clustering?
Clustering is an unsupervised learning technique where the algorithm groups similar data points together.
Imagine:
- Customers with similar buying habits
- Students with similar scores
- Products with similar features
The machine finds patterns without being told the answers.
What is K-Means?
K-Means is one of the most popular clustering algorithms.
It works in five simple steps:
1. Choose the number of clusters (K)
2. Randomly place K centroids
3. Assign each point to the nearest centroid
4. Move centroids to the average of their assigned points
5. Repeat until stable
That’s it.
What This Code Does
1️⃣ Import Libraries
- numpy → create data
- matplotlib → visualization
- KMeans from sklearn → clustering algorithm
2️⃣ Generate Random Data
X = np.random.rand(100, 2)
This creates:
- 100 data points
- 2 features (x and y coordinates)
So we get 100 dots on a 2D plane.
3️⃣ Create K-Means Model
kmeans = KMeans(n_clusters=3, random_state=42)
We tell the model:
Create 3 clusters.
4️⃣ Train the Model
kmeans.fit(X)
Now the algorithm:
- Finds patterns
- Groups points
- Calculates cluster centers
5️⃣ Get Results
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
labels → Which cluster each point belongs to
centroids → Center of each cluster
6️⃣ Visualize the Clusters
plt.scatter(X[:, 0], X[:, 1], c=labels)
Each cluster gets a different color.
Then we plot centroids using:
marker='X', s=200
Big X marks = cluster centers.
What the Graph Shows
- Different colors → Different clusters
- Big X → Center of each cluster
- Points closer to a centroid belong to that cluster
The algorithm has automatically discovered structure in random data.
That’s powerful.
Core Learning From This
Don’t memorize the code.
Understand the pattern:
Create Data → Choose K → Fit Model → Get Labels → Visualize
That’s the real workflow.
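The fragments explained above can be assembled into one runnable sketch. This assumes numpy, matplotlib, and scikit-learn are installed; the Agg backend, the seed, and the explicit n_init are additions for reproducible headless runs, not part of the original post:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe backend; remove for interactive use
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# 1. Create data: 100 random points on a 2D plane
np.random.seed(42)
X = np.random.rand(100, 2)

# 2. Choose K and fit the model
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
kmeans.fit(X)

# 3. Get results: cluster assignment per point, and cluster centers
labels = kmeans.labels_
centroids = kmeans.cluster_centers_

# 4. Visualize: colored points plus big X marks at the centroids
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.scatter(centroids[:, 0], centroids[:, 1], marker='X', s=200, c='red')
plt.title("K-Means Clustering (K=3)")
plt.savefig("kmeans.png")
```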
Where K-Means Is Used in Real Life
- Customer segmentation
- Image compression
- Market basket analysis
- Recommendation systems
- Anomaly detection
Why This Matters
Clustering is one of the first steps into Machine Learning.
If you understand this:
You’re no longer just plotting charts.
You’re analyzing patterns.
Thursday, 5 March 2026
Data Science with Python - Basics
Python Developer March 05, 2026 Data Science, Python No comments
Introduction
Data science has become one of the most important fields in the modern digital world. Organizations rely on data to understand trends, predict outcomes, and make smarter decisions. To work effectively with data, professionals need tools that allow them to analyze, visualize, and interpret information efficiently. One of the most popular tools for this purpose is Python, a versatile programming language widely used in data analysis and machine learning.
The book “Data Science with Python – Basics” by Aditya Raj introduces readers to the fundamental concepts of data science and demonstrates how Python can be used to perform data analysis and build useful insights from datasets. The book is designed as a beginner-friendly guide that explains the essential skills required to start a career or learning journey in data science. It contains around 186 pages and focuses on practical understanding rather than complex theory.
Understanding Data Science
Data science is the process of extracting meaningful insights from data using analytical techniques, programming, and statistical methods. It combines several disciplines, including mathematics, computer science, and domain knowledge.
The book explains how data scientists work with data throughout the entire pipeline. This process generally includes:
- Collecting data from different sources
- Cleaning and preparing the data
- Analyzing patterns and relationships
- Building predictive models
- Communicating results through visualizations
Understanding these steps helps beginners see how raw information can be transformed into valuable insights.
Why Python is Important for Data Science
Python has become one of the most widely used programming languages in the data science community. Its simple syntax and powerful libraries make it accessible to beginners while still being capable of handling complex analytical tasks. Python supports multiple programming styles and includes built-in data structures that help developers build applications quickly.
In the book, Python is used to demonstrate how data analysis tasks can be performed efficiently. Learners are introduced to common Python tools and libraries that are widely used in the industry. These tools allow users to manipulate data, perform calculations, and visualize results.
Core Topics Covered in the Book
The book focuses on building a strong foundation in data science using Python. Some of the major topics typically covered include:
Python Programming Fundamentals
Readers first learn the basics of Python programming, including variables, data types, loops, and functions. These concepts are essential for writing scripts that process and analyze data.
Data Manipulation and Analysis
Data scientists often work with large datasets. The book introduces methods for reading, cleaning, and transforming data so that it can be analyzed effectively.
Data Visualization
Visual representation of data helps people understand patterns and trends quickly. Learners explore techniques for creating charts and graphs that make complex information easier to interpret.
Introduction to Machine Learning Concepts
Although the book focuses on fundamentals, it also introduces the idea of machine learning—where algorithms learn patterns from data and make predictions.
These topics give readers a broad understanding of how data science workflows operate in real-world scenarios.
Skills Readers Can Develop
After studying this book, readers can develop several valuable skills, including:
- Understanding the basic workflow of data science projects
- Writing Python code for data analysis tasks
- Cleaning and preparing datasets for analysis
- Visualizing data to uncover patterns and insights
- Building a foundation for learning machine learning and advanced analytics
These skills form the starting point for anyone interested in becoming a data analyst or data scientist.
Who Should Read This Book
“Data Science with Python – Basics” is particularly suitable for:
- Students who want to start learning data science
- Beginners with little or no programming experience
- Professionals interested in switching to a data-driven career
- Anyone curious about how Python is used in data analysis
Because the book focuses on fundamental concepts, it serves as a stepping stone toward more advanced topics in machine learning and artificial intelligence.
Hard Copy: Data Science with Python - Basics
Kindle: Data Science with Python - Basics
Conclusion
“Data Science with Python – Basics” provides a clear and accessible introduction to the world of data science. By combining simple explanations with practical examples, the book helps beginners understand how data can be analyzed and interpreted using Python.
For anyone starting their journey in data science, learning Python and understanding the basic workflow of data analysis are essential first steps. This book offers a solid foundation for developing those skills and prepares readers for deeper exploration of machine learning, data analytics, and artificial intelligence in the future.
The AI Edge: How to Thrive Within Civilization's Next Big Disruption
Introduction
Artificial intelligence is rapidly transforming the world, influencing industries, careers, and everyday life. From automated systems and data-driven decision-making to intelligent assistants and advanced analytics, AI is becoming a powerful force shaping the future. As technological progress accelerates, individuals and organizations must learn how to adapt and thrive in this evolving landscape.
The AI Edge: How to Thrive Within Civilization’s Next Big Disruption, organized by Erik Seversen and written with contributions from dozens of global AI experts, explores how artificial intelligence is reshaping society and what people can do to remain competitive in this new era. The book offers practical insights and real-world perspectives on how individuals, businesses, and professionals can leverage AI to improve productivity, innovation, and decision-making.
Understanding the AI Revolution
The book begins by explaining that humanity is entering a new technological transformation similar in scale to previous revolutions such as the Industrial Revolution and the Digital Age. Artificial intelligence is no longer just a research topic—it is becoming integrated into everyday tools, workflows, and industries.
AI technologies are now capable of analyzing large amounts of data, identifying patterns, generating creative content, and assisting humans in complex decision-making processes. As these systems continue to evolve, they will reshape how businesses operate, how professionals work, and how society functions overall.
The book emphasizes that understanding AI is no longer optional. Developing AI literacy—the ability to understand and work with intelligent systems—is becoming an essential skill for modern professionals.
Learning to Work Alongside AI
One of the central ideas of the book is that AI should not be viewed as a replacement for human intelligence but as a tool that enhances human capabilities. Rather than eliminating human roles entirely, AI can help people perform tasks faster, analyze information more effectively, and focus on higher-level creative and strategic thinking.
Professionals who learn how to collaborate with AI technologies can gain a significant advantage. The book describes this advantage as the “AI Edge”—the competitive benefit gained by individuals who understand how to use artificial intelligence effectively in their work and decision-making processes.
By embracing AI tools, workers can improve productivity, automate repetitive tasks, and unlock new opportunities for innovation.
Insights from Global AI Experts
A distinctive feature of the book is its collaborative nature. It includes insights from 34 experts from around the world, representing fields such as technology, healthcare, business, entrepreneurship, education, and creative industries.
Each contributor provides a unique perspective on how artificial intelligence is transforming their specific field. These perspectives highlight the wide-ranging impact of AI across society and demonstrate how different sectors are adapting to technological change.
Through these real-world examples, readers gain a broader understanding of how AI is already influencing industries and what changes may occur in the near future.
AI’s Impact on Work and Innovation
One of the key themes explored in the book is the changing nature of work. As AI systems become more capable, many routine and repetitive tasks can be automated. However, this shift also creates new opportunities for human creativity, innovation, and problem-solving.
The book encourages readers to develop skills that complement AI technologies, such as critical thinking, adaptability, creativity, and leadership. These human-centered abilities will remain valuable even as intelligent systems become more advanced.
Organizations that integrate AI effectively into their operations will likely gain significant advantages in productivity, efficiency, and innovation.
Ethical and Responsible AI Adoption
Another important aspect discussed in the book is the responsible use of artificial intelligence. As AI systems become more powerful, questions about ethics, accountability, and societal impact become increasingly important.
The book highlights the need for thoughtful and responsible AI adoption. This includes ensuring transparency in AI systems, addressing potential biases in algorithms, and maintaining human oversight in decision-making processes.
By approaching AI with awareness and responsibility, society can maximize its benefits while minimizing potential risks.
Preparing for an AI-Driven Future
A major message of the book is that the future belongs to those who are willing to learn and adapt. Artificial intelligence will continue to influence nearly every profession and industry, making it important for individuals to stay informed and develop relevant skills.
The book encourages readers to embrace curiosity and continuous learning. By understanding how AI works and how it can be applied in different contexts, individuals can position themselves to succeed in a rapidly evolving technological environment.
Rather than fearing technological disruption, the book presents AI as an opportunity for growth and transformation.
Hard Copy: The AI Edge: How to Thrive Within Civilization's Next Big Disruption
Kindle: The AI Edge: How to Thrive Within Civilization's Next Big Disruption
Conclusion
The AI Edge: How to Thrive Within Civilization’s Next Big Disruption offers a thoughtful and practical guide to navigating the age of artificial intelligence. Through insights from global experts and real-world examples, the book explains how AI is reshaping industries, careers, and society as a whole.
The key message is clear: artificial intelligence is not just a technological trend—it is a major shift that will define the future of work and innovation. Those who learn to understand and collaborate with AI will gain a powerful advantage in the years ahead.
By promoting AI literacy, adaptability, and responsible innovation, the book helps readers prepare for a world where humans and intelligent machines increasingly work together to solve complex challenges and create new opportunities.