Tuesday, 28 April 2026

Python Coding Challenge - Day 1141 | What is the output of the following Python code?

import copy
a = [[1, 2], [3, 4]]
b = copy.deepcopy(a)
b[0][0] = 100
print(a)

Code Explanation:

🔹 1. Importing the Module
import copy
✅ Explanation:
Imports Python’s built-in copy module.
This module provides:
copy.copy() → shallow copy
copy.deepcopy() → deep copy

🔹 2. Creating a Nested List
a = [[1,2],[3,4]]
✅ Explanation:
a is a list of lists (nested structure).
Memory structure:
Outer list contains two inner lists
a → [ [1,2], [3,4] ]

🔹 3. Deep Copy
b = copy.deepcopy(a)
✅ Explanation:
Creates a completely independent copy of a.
Both:
Outer list
Inner lists
are copied separately.
๐Ÿ” Important:
b ≠ a
b[0] ≠ a[0]

✔️ No shared references

🔹 4. Modifying the Copied List
b[0][0] = 100
✅ Explanation:
Changes the first element of the first inner list in b.
So:
b → [ [100,2], [3,4] ]

🔹 5. Printing the Original List
print(a)
✅ Explanation:
Since a and b are completely independent, changes in b do NOT affect a.

🎯 Final Output
[[1, 2], [3, 4]]
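
For contrast, here is a small sketch (standard-library copy module only) showing how a shallow copy would have behaved differently in the same situation:

```python
import copy

a = [[1, 2], [3, 4]]

shallow = copy.copy(a)     # new outer list, but the inner lists are shared with a
deep = copy.deepcopy(a)    # outer list AND inner lists are all new objects

shallow[0][0] = 100        # mutates the inner list shared with a
print(a)                   # [[100, 2], [3, 4]] -- the original IS affected
print(deep)                # [[1, 2], [3, 4]]   -- the deep copy is untouched
```

This is the key difference: after copy.copy(), shallow[0] is the same object as a[0]; after copy.deepcopy(), it is not.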

Python Coding Challenge - Day 1140 | What is the output of the following Python code?

def gen():
    yield 1
    yield 2
    yield 3

g = gen()
print(next(g))
print(next(g))

 Code Explanation:

🔹 1. Function Definition
def gen():
✅ Explanation:
A function gen is defined.
Because it uses yield, it becomes a generator function (not a normal function).


🔹 2. First yield
yield 1
✅ Explanation:
yield produces a value without ending the function.
It pauses execution and remembers its state.

🔹 3. Second yield
yield 2
✅ Explanation:
When resumed, the function continues from where it stopped.
Now it yields 2.

🔹 4. Third yield
yield 3
✅ Explanation:
On next resume, it yields 3.
After this, the generator is exhausted.

🔹 5. Creating the Generator Object
g = gen()
✅ Explanation:
Calling gen() does NOT execute the function immediately.
It returns a generator object.
Execution starts only when next() is called.

🔹 6. First next() Call
print(next(g))
๐Ÿ” What happens:
Starts execution of gen()
Runs until first yield
✔️ Output:
1

🔹 7. Second next() Call
print(next(g))
๐Ÿ” What happens:
Resumes from previous pause
Continues to second yield

✔️ Output:
2

🎯 Final Output
1
2
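
A short sketch of what happens if iteration continues past the last yield: a third next() returns 3, a fourth raises StopIteration, and that exception is exactly how a for loop knows when to stop (stdlib only):

```python
def gen():
    yield 1
    yield 2
    yield 3

g = gen()
print(next(g))  # 1
print(next(g))  # 2
print(next(g))  # 3

try:
    next(g)                  # generator is now exhausted
except StopIteration:
    print("exhausted")

# A for loop (or list()) consumes a fresh generator and stops cleanly:
values = list(gen())         # [1, 2, 3]
```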

Python Coding Challenge - Question with Answer (ID -280426)

d = {"a": 1}
print(d.get("b", 2))

Explanation:

1. Creating a Dictionary
d = {"a": 1}
Here, a dictionary named d is created.
A dictionary in Python stores data in key–value pairs.
In this case:
Key: "a"
Value: 1

So the dictionary looks like:

{"a": 1}

2. Using the .get() Method
print(d.get("b", 2))
➤ What .get() does:
The .get() method is used to retrieve the value of a key from a dictionary.

Syntax:

dictionary.get(key, default_value)
➤ In this example:
"b" is the key we are trying to access.
2 is the default value.

3. Key Lookup Behavior
The dictionary d does NOT contain the key "b".
Instead of raising a KeyError (which d["b"] would), .get() returns the default value, which is 2.

4. Output
2
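
The three possible lookup behaviours side by side, as a small sketch (pure stdlib):

```python
d = {"a": 1}

print(d.get("a", 2))   # 1    -- key exists, the default is ignored
print(d.get("b", 2))   # 2    -- key missing, the default is returned
print(d.get("b"))      # None -- key missing and no default given
# d["b"] would raise KeyError instead of returning anything
```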

Book: Python for Ethical Hacking Tools, Libraries, and Real-World Applications

April Python Bootcamp Day 17

 

Day 17: Web Scraping using Python

What is Web Scraping?

Web scraping is the process of extracting data from websites automatically using code instead of manually copying it.

It helps in:

  • Data collection
  • Automation
  • Building datasets
  • Market research

Tools Required

1. requests

  • Used to fetch webpage or API data
  • Works with HTTP requests

2. BeautifulSoup

  • Parses HTML content
  • Helps extract specific elements like headings, links, tables

Data Flow Understanding

HTML Scraping Flow:

Website → HTML → BeautifulSoup → Data

API Data Flow:

API Endpoint → JSON → Python → Data

Sample HTML File (index.html)

<!DOCTYPE html>
<html>
<head>
<title>Sample Webpage</title>
</head>
<body>

<h1>Main Heading</h1>
<h2>Sub Heading</h2>
<h3>Section Heading</h3>

<p>This is a paragraph about web scraping.</p>
<p>Python makes scraping easy using BeautifulSoup.</p>

<a href="https://www.google.com">Google</a><br>
<a href="https://www.github.com">GitHub</a>

<h2>Student Table</h2>
<table border="1">
<tr>
<th>Name</th>
<th>Age</th>
<th>City</th>
<th>Email</th>
</tr>
<tr>
<td>Piyush</td>
<td>21</td>
<td>Nagpur</td>
<td>piyush@example.com</td>
</tr>
<tr>
<td>Rahul</td>
<td>22</td>
<td>Pune</td>
<td>Rahul@gmail.com</td>
</tr>
</table>

</body>
</html>

Web Scraping using BeautifulSoup

from bs4 import BeautifulSoup

with open("index.html", "r", encoding="utf-8") as file:
    html_content = file.read()

soup = BeautifulSoup(html_content, "html.parser")

# 1. Title
print(f"Title: {soup.title.text}")

# 2. Headings
for tag in ["h1", "h2", "h3"]:
    for heading in soup.find_all(tag):
        print(f"{tag.upper()}: {heading.text}")

# 3. Paragraphs
for p in soup.find_all("p"):
    print(p.text)

# 4. Links
for a in soup.find_all("a"):
    print(f"Text: {a.get_text()}, URL: {a.get('href')}")

# 5. Table Data
table = soup.find("table")
rows = table.find_all("tr")

for row in rows:
    cols = row.find_all(["td", "th"])
    data = [col.text.strip() for col in cols]
    print(data)

# 6. Extract all text
print(soup.get_text(separator="\n"))

Web Scraping using APIs (JSON Data)

import requests

url = "https://jsonplaceholder.typicode.com/posts"

response = requests.get(url)

if response.status_code == 200:
    data = response.json()

    for post in data[:5]:
        print(f"Title: {post['title']}")
        print(f"Body: {post['body']}")
else:
    print("Error:", response.status_code)

Advanced Example: Fetch API Data and Save to CSV

import requests
import csv

def fetch_api_data(url):
    headers = {
        "User-Agent": "Mozilla/5.0"
    }
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print("Error:", e)
        return None

url = "https://jsonplaceholder.typicode.com/users"
data = fetch_api_data(url)

if data:
    for user in data:
        print(f"{user['name']} - {user['email']}")

    with open("HR.csv", "w", newline="", encoding="utf-8") as file:
        writer = csv.writer(file)
        writer.writerow(['Name', 'Email'])

        for user in data:
            writer.writerow([user['name'], user['email']])

Key Concepts Learned

  • Difference between HTML scraping and API scraping
  • How to parse HTML using BeautifulSoup
  • Extracting headings, paragraphs, links, and tables
  • Fetching JSON data using requests
  • Saving extracted data into CSV files

Best Practices for Web Scraping

  • Always check website permissions (robots.txt)
  • Avoid sending too many requests (rate limiting)
  • Use headers like User-Agent
  • Prefer APIs over HTML scraping when available
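
The robots.txt check in the first best practice can be automated with the standard library's urllib.robotparser. A minimal sketch, using a made-up rules snippet for illustration (in practice you would call rp.set_url("https://example.com/robots.txt") and rp.read() to fetch the real file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, just for this sketch
rules = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(user_agent, url) tells you whether scraping a URL is allowed
print(rp.can_fetch("MyScraper", "https://example.com/index.html"))    # True
print(rp.can_fetch("MyScraper", "https://example.com/private/x"))     # False
```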

Summary

Web scraping is a powerful skill for automating data extraction.
Using BeautifulSoup, you can extract structured data from HTML, while requests helps fetch data from APIs efficiently.


Assignment Questions

Theory-Based

  1. What is web scraping? Explain with an example.
  2. Difference between web scraping and API data fetching.
  3. What is BeautifulSoup used for?
  4. Why is JSON preferred in APIs?
  5. What are headers in HTTP requests and why are they used?

Practical Questions

  1. Extract all:
    • Headings (h1, h2, h3)
    • Paragraphs
    • Links
      from the given HTML file.
  2. Modify the code to:
    • Extract only emails from the table.
  3. Scrape:
    • Only table data and convert it into a list of dictionaries.
  4. Fetch data from:

    https://jsonplaceholder.typicode.com/comments
    • Print name and email of first 10 users.
  5. Save API data into a CSV file with:
    • ID, Name, Email

Challenge Tasks

  1. Build a scraper that:
    • Extracts all links from a webpage
    • Saves them into a text file
  2. Create a script that:
    • Scrapes table data
    • Converts it into JSON format
  3. Combine both:
    • Scrape HTML data
    • Store it in CSV
    • Also fetch API data and merge both datasets
  4. Add error handling:
    • Handle missing tags
    • Handle request failures


Monday, 27 April 2026

April Python Bootcamp Day 16

 



Day 16: Working with APIs using FastAPI

Introduction to APIs

An API (Application Programming Interface) acts as a bridge that allows two software systems to communicate with each other.

Basic Flow:

  • Client (You/App) → Sends a request
  • Server → Processes the request
  • Server → Sends a response

Real-Life Example: Weather App

  • App sends a request to a weather API
  • API processes it
  • API returns weather data (temperature, humidity, etc.)

Types of APIs

1. REST API

  • Most commonly used
  • Uses HTTP methods
  • Example: Food delivery apps (Swiggy, Zomato)

2. SOAP API

  • More secure, structured
  • Used in banking systems
  • Example: Bank transactions

3. GraphQL API

  • Fetch only required data
  • Flexible and efficient
  • Example: Modern web apps

HTTP Methods (Core of APIs)

Method    Purpose
GET       Fetch data
POST      Send data
PUT       Update data
DELETE    Delete data

What is FastAPI?

FastAPI is a modern Python web framework used to build APIs quickly and efficiently.

Key Features:

  • High performance (uses async)
  • Automatic API documentation (Swagger UI)
  • Easy to learn
  • Built-in validation using Pydantic

What is JSON?

JSON (JavaScript Object Notation) is the standard format used to exchange data between client and server.

Example:

{
  "name": "Piyush",
  "age": 21
}
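
On the Python side, the standard-library json module converts between this JSON text and native dictionaries; a minimal sketch:

```python
import json

payload = {"name": "Piyush", "age": 21}

text = json.dumps(payload)   # dict -> JSON string (what the server sends)
print(text)                  # {"name": "Piyush", "age": 21}

data = json.loads(text)      # JSON string -> dict (what the client parses)
print(data["age"])           # 21
```

FastAPI does this serialization automatically when a route returns a dict, but understanding dumps/loads helps when debugging raw requests and responses.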

What is Pydantic?

Pydantic ensures:

  • Data validation
  • Correct structure
  • Type safety

Basic FastAPI Example

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

@app.get("/")
def home():
    return {"message": "Welcome to FastAPI!"}

@app.get("/users")
def get_users():
    return ["Piyush", "Aman", "Amit"]

@app.get("/user/{user_id}")
def get_user(user_id: int):
    return {"user_id": user_id}

@app.post("/create-user")
def create_user(user: dict):
    return {
        "message": "User created successfully",
        "user": user
    }

Using Pydantic Model

class User(BaseModel):
    name: str
    age: int

@app.post("/user-create")
def create_user1(user: User):
    return {
        "name": user.name,
        "age": user.age
    }

CRUD API Example (Students)

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Student(BaseModel):
    name: str
    age: int

students = {}

@app.get("/students")
def get_students():
    return students

@app.get("/students/{student_id}")
def get_student(student_id: int):
    if student_id not in students:
        raise HTTPException(status_code=404, detail="Student not found")
    return students[student_id]

@app.post("/students/{student_id}")
def create_student(student_id: int, student: Student):
    if student_id in students:
        raise HTTPException(status_code=400, detail="Student already exists")
    students[student_id] = student
    return {"message": "Student created", "data": student}

@app.put("/students/{student_id}")
def update_student(student_id: int, student: Student):
    if student_id not in students:
        raise HTTPException(status_code=404, detail="Student not found")
    students[student_id] = student
    return {"message": "Student updated", "data": student}

@app.delete("/students/{student_id}")
def delete_student(student_id: int):
    if student_id not in students:
        raise HTTPException(status_code=404, detail="Student not found")
    del students[student_id]
    return {"message": "Student deleted"}

Summary

  • APIs enable communication between systems
  • HTTP methods define operations
  • FastAPI simplifies API development
  • JSON is used for data exchange
  • Pydantic ensures data validation
  • CRUD operations are essential for real-world APIs

Assignment Questions

Theory-Based

  1. What is an API? Explain with a real-world example.
  2. Differentiate between REST, SOAP, and GraphQL APIs.
  3. What are HTTP methods? Explain each with use cases.
  4. Why is JSON used in APIs?
  5. What is the role of Pydantic in FastAPI?

Practical Questions

  1. Create a FastAPI app with:
    • A GET route /hello returning "Hello World"
  2. Create an API to:
    • Add a product (name, price)
    • Get all products
  3. Build a CRUD API for:
    • Books (title, author, price)
  4. Modify the student API:
    • Add email field
    • Validate age should be greater than 5
  5. Create an API endpoint:
    • /square/{num} → returns square of a number

Challenge Task

Build a Mini User Management API:

  • Add user
  • Get all users
  • Update user
  • Delete user
  • Use Pydantic validation
  • Handle errors properly

Developing Machine Learning Solutions

 

Machine Learning (ML) is transforming industries—from healthcare and finance to e-commerce and entertainment. But building an effective ML system is not just about training a model—it’s about designing a complete solution that works in real-world environments.

The Coursera course Developing Machine Learning Solutions provides a practical introduction to the end-to-end machine learning lifecycle, helping learners understand how to move from raw data to deployed models using modern tools and best practices.


What is a Machine Learning Solution?

A machine learning solution is a system that uses data and algorithms to make predictions or decisions with minimal human intervention.

However, developing such a solution involves much more than coding. It includes:

  • Understanding the problem
  • Preparing and managing data
  • Training and evaluating models
  • Deploying and maintaining systems

The Machine Learning Lifecycle

One of the key highlights of this course is understanding the ML lifecycle, which includes several critical stages:

1. Problem Definition

Every ML project begins with identifying a clear problem. This involves understanding business goals and translating them into a machine learning task.

2. Data Collection and Preparation

Data is the foundation of ML. You need to gather relevant datasets, clean them, and prepare them for analysis.

3. Model Development

At this stage, algorithms are selected and trained to learn patterns from data. Different models may be tested to find the best fit.

4. Model Evaluation

Models are evaluated using performance metrics to ensure accuracy and reliability. The course emphasizes learning techniques to evaluate model performance effectively.

5. Deployment

Once validated, models are deployed into production environments where they can deliver real-world value.


Role of Cloud Platforms and AWS

A unique aspect of this course is its focus on using cloud-based tools, particularly Amazon Web Services (AWS).

Learners explore how to use services like:

  • AWS SageMaker for model building
  • Cloud infrastructure for scalability
  • Deployment pipelines for real-time predictions

This approach enables developers to build scalable and production-ready ML systems.


Understanding MLOps

Modern ML development doesn’t stop at deployment. The course introduces MLOps (Machine Learning Operations)—a set of practices that combine ML, DevOps, and data engineering.

Key benefits of MLOps include:

  • Automating workflows
  • Monitoring model performance
  • Ensuring continuous improvement

MLOps plays a crucial role in streamlining development and deployment processes.


Model Sources and Selection

The course also highlights that not all models need to be built from scratch. Developers can:

  • Use pre-trained models
  • Fine-tune existing solutions
  • Combine multiple models

This flexibility allows faster development and better performance depending on the use case.


Skills You Gain

By completing this course, learners develop practical skills such as:

  • Predictive modeling
  • Model evaluation techniques
  • Applied machine learning
  • Working with cloud ML tools
  • Understanding end-to-end ML workflows

Why This Course Matters

In today’s industry, companies are not just looking for people who understand algorithms—they want professionals who can build complete ML systems.

This course bridges the gap between theory and practice by focusing on:

  • Real-world workflows
  • Scalable infrastructure
  • Production-ready solutions

Join Now: Developing Machine Learning Solutions

Conclusion

Developing machine learning solutions is a multidisciplinary process that combines data, algorithms, and engineering practices.

The Developing Machine Learning Solutions course equips learners with the knowledge to handle this complexity—from understanding the ML lifecycle to deploying models using cloud platforms.

As machine learning continues to grow, mastering these skills will be essential for anyone looking to build impactful, real-world AI systems.

Responsible AI in the Generative AI Era

 



Artificial Intelligence is no longer a futuristic concept—it is deeply embedded in our daily lives. From chatbots generating human-like responses to tools creating images, videos, and code, Generative AI (GenAI) is transforming industries at an unprecedented pace. But with this power comes responsibility.

The rise of generative technologies has sparked important conversations around ethics, fairness, transparency, and accountability. This is where Responsible AI becomes crucial—ensuring that innovation does not come at the cost of societal harm.


What is Generative AI?

Generative AI refers to systems capable of creating new content—text, images, audio, and more—based on user prompts. Generative AI has gained massive popularity due to tools like ChatGPT and image generators.

While it offers immense benefits such as automation, creativity, and efficiency, it also introduces risks like misinformation, bias, and misuse.


Why Responsible AI Matters

Responsible AI is about designing, developing, and deploying AI systems in a way that is ethical, transparent, and aligned with human values.

According to Coursera’s learning resources, ethical AI use involves:

  • Avoiding harm
  • Respecting privacy
  • Ensuring fairness and inclusivity
  • Maintaining accountability

Without these principles, generative AI can amplify existing societal issues—such as bias in data or the spread of false information at scale.


Key Challenges in the Generative AI Era

1. Bias and Fairness

AI systems learn from data. If the data contains biases, the AI can replicate or even amplify them. This can lead to unfair outcomes in areas like hiring, lending, or content moderation.

2. Misinformation and Deepfakes

Generative AI can create highly realistic content, making it difficult to distinguish between real and fake. This raises concerns about misinformation, especially in media and politics.

3. Privacy Concerns

AI models often rely on large datasets, which may include sensitive or personal information. Protecting user data is a major ethical responsibility.

4. Lack of Transparency

Many AI systems operate as “black boxes,” making it hard to understand how decisions are made. This limits trust and accountability.

5. Intellectual Property Issues

Who owns AI-generated content? This question is still evolving, especially with concerns about training data and copyright.


Principles of Responsible AI

The Coursera course highlights foundational principles that guide responsible AI development:

✔ Fairness

AI systems should treat all users equally and avoid discrimination.

✔ Accountability

Organizations must take responsibility for AI outcomes and decisions.

✔ Transparency

Users should understand how AI systems work and how decisions are made.

✔ Privacy & Security

User data must be protected and handled responsibly.

✔ Human-Centric Design

AI should augment human capabilities, not replace or harm them.


Building Responsible Generative AI

To ensure ethical AI usage, organizations and developers can adopt the following practices:

  • Establish AI governance frameworks
  • Regularly audit models for bias and fairness
  • Use Explainable AI (XAI) techniques
  • Implement strong data protection policies
  • Encourage human oversight in decision-making

Courses and training programs emphasize the importance of validating AI outputs and designing systems that reduce risks while maximizing benefits.


The Future of Responsible AI

As generative AI continues to evolve, responsible practices will become even more critical. Governments, organizations, and individuals must collaborate to create ethical standards and regulations.

Responsible AI is not just a technical requirement—it is a societal necessity. It ensures that innovation benefits everyone while minimizing harm.


Join Now: Responsible AI in the Generative AI Era

Conclusion

The generative AI revolution is reshaping the world—but its success depends on how responsibly we use it. By embracing ethical principles and prioritizing transparency, fairness, and accountability, we can build AI systems that truly serve humanity.

Responsible AI is not optional—it is the foundation of a sustainable and trustworthy AI-driven future.

Data Scientist Career Guide and Interview Preparation

 


In today’s data-driven world, the role of a data scientist has become one of the most sought-after careers. Organizations rely on data scientists to uncover insights, build predictive models, and drive strategic decisions. However, breaking into this field requires more than just technical knowledge—it demands career planning, portfolio building, and strong interview preparation.

The Coursera course Data Scientist Career Guide and Interview Preparation provides a structured roadmap to help aspiring professionals navigate this journey successfully.


Understanding the Role of a Data Scientist

A data scientist combines skills from statistics, programming, and domain expertise to extract meaningful insights from data. The course emphasizes exploring:

  • Career paths in data science
  • Industry opportunities
  • Core responsibilities of a data scientist

Understanding these fundamentals helps candidates align their skills with industry expectations and choose the right specialization.


Building a Strong Foundation

Before applying for jobs, it’s essential to prepare strategically. The course highlights key steps such as:

1. Resume and Portfolio Development

A strong resume and portfolio are crucial for showcasing your skills. Candidates are encouraged to:

  • Highlight real-world projects
  • Demonstrate problem-solving abilities
  • Include GitHub or project links

Creating a portfolio helps employers evaluate your practical experience beyond theoretical knowledge.

2. Crafting Your Personal Brand

Building a personal brand through platforms like LinkedIn and networking is essential. It increases visibility and opens doors to job opportunities.

3. Elevator Pitch

Being able to clearly explain your skills and goals in a short pitch can make a lasting impression during networking and interviews.


Job Search Strategy

The course teaches candidates how to approach job searching effectively:

  • Research job listings and company requirements
  • Identify roles that match your skills
  • Tailor applications for each position

A focused job search ensures that you apply to roles where you have the highest chance of success.


Interview Preparation: What to Expect

Data science interviews are multi-stage processes designed to test both technical and soft skills.

Common Interview Stages

  • Recruiter screening
  • Technical assessments (coding, statistics, ML)
  • Case studies or take-home assignments
  • Behavioral interviews

Key Skills Evaluated

  • Programming (Python/R)
  • SQL and data manipulation
  • Machine learning concepts
  • Statistical reasoning
  • Communication and business understanding

Tips to Ace Data Science Interviews

✔ Research the Company

Understanding the company’s goals and culture helps tailor your answers effectively.

✔ Practice Common Questions

Rehearse technical and behavioral questions to build confidence.

✔ Communicate Clearly

Employers value candidates who can explain complex insights in simple terms.

✔ Showcase Real Impact

Focus on how your work created measurable business value.

✔ Ask Thoughtful Questions

Engaging with interviewers shows curiosity and genuine interest in the role.


Networking and Career Growth

Networking plays a critical role in landing a job. The course emphasizes:

  • Building professional connections
  • Leveraging referrals
  • Participating in data science communities

These strategies can significantly increase your chances of securing interviews and job offers.


Join Now: Data Scientist Career Guide and Interview Preparation

Conclusion

Becoming a data scientist is not just about mastering algorithms—it’s about strategic career planning, continuous learning, and effective communication.

The Data Scientist Career Guide and Interview Preparation course provides a comprehensive roadmap—from building your resume to acing interviews—helping you transition from a learner to a job-ready professional.

With the right preparation and mindset, you can successfully navigate the competitive data science job market and build a rewarding career.


🚀 Day 32/150 – Reverse a Number in Python


Reversing a number means changing the order of its digits from back to front.
Example: 12345 → 54321

Let’s explore different ways to reverse a number in Python 👇

🔹 Method 1 – Using a while Loop

n = 12345
rev = 0
while n > 0:
    digit = n % 10
    rev = rev * 10 + digit
    n //= 10
print("Reversed Number:", rev)




✅ Best numeric method.

🔹 Method 2 – Taking User Input

n = int(input("Enter a number: "))
rev = 0
while n > 0:
    digit = n % 10
    rev = rev * 10 + digit
    n //= 10
print("Reversed Number:", rev)




✅ Useful for dynamic programs.

🔹 Method 3 – Using String Slicing

n = 12345
rev = str(n)[::-1]
print("Reversed Number:", rev)



✅ Shortest and easiest method (note: the result is a string, not an int).

🔹 Method 4 – Using Recursion

def reverse_num(n, rev=0):
    if n == 0:
        return rev
    return reverse_num(n // 10, rev * 10 + n % 10)

print(reverse_num(12345))







✅ Great for learning recursion.

📌 Example Output

For 12345:

Output: 54321

🎯 Best Method?

while loop → best for logic building
string slicing → fastest to write
recursion → concept learning
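
All of the numeric methods above assume a non-negative integer. As a hedged sketch, here is one way to extend the while-loop approach to handle negative inputs as well (reverse_number is a new helper name introduced for this example, not part of the methods above):

```python
def reverse_number(n):
    """Reverse the digits of an integer, preserving its sign."""
    sign = -1 if n < 0 else 1
    n = abs(n)
    rev = 0
    while n > 0:
        rev = rev * 10 + n % 10
        n //= 10
    return sign * rev

print(reverse_number(-12345))  # -54321
print(reverse_number(1200))    # 21 -- trailing zeros are dropped, as with all numeric reversal
```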
