Tuesday, 28 April 2026

Python: Basic to Advanced Syllabus

Basics

1. Introduction to Python

  • What is Python

  • Installation & setup

  • Writing and running first program

  • Python syntax basics

2. Variables & Data Types

  • Variables and naming rules

  • Built-in data types:

    • int, float, complex

    • str

    • bool

  • Type conversion (casting)

3. Operators

  • Arithmetic operators

  • Comparison operators

  • Logical operators

  • Assignment operators

  • Identity operators (is, is not)

  • Membership operators (in, not in)

4. Input and Output

  • input() function

  • Output formatting

    • f-strings

    • .format()

5. Control Flow

  • if, elif, else

  • Nested conditions

  • match-case (Python 3.10+)


6. Loops

  • for loop

  • while loop

  • Loop control statements:

    • break

    • continue

    • pass

Data Structures & Strings

7. Strings

  • String creation

  • Indexing & slicing

  • String methods

8. Lists

  • Creating lists

  • Indexing & slicing

  • List methods

  • Nested lists

9. Tuples

  • Tuple creation

  • Immutability

  • Packing & unpacking

10. Sets

  • Set operations

  • Methods

11. Dictionaries

  • Key-value pairs

  • Dictionary methods

  • Nested dictionaries

Functions & Modularity

12. Functions

  • Defining and calling functions

  • Parameters and arguments

  • Return values

13. Advanced Function Concepts

  • Default arguments

  • Keyword arguments

  • Variable-length arguments (*args, **kwargs)

  • Lambda functions

  • Recursion

14. Modules

  • Importing modules

  • Built-in modules

  • Creating custom modules

15. Packages

  • Package structure

  • __init__.py

Object-Oriented Programming

16. Classes and Objects

  • Defining classes

  • Creating objects

  • __init__ method

  • Instance attributes

17. OOP Concepts

  • Encapsulation

  • Abstraction

  • Inheritance

  • Polymorphism

18. Advanced OOP

  • Class variables vs instance variables

  • Class methods and static methods

  • Dunder (magic) methods

Advanced Python Concepts

19. Iterators

  • Iterable vs iterator

  • __iter__() and __next__()

20. Generators

  • yield keyword

  • Generator functions

21. Comprehensions

  • List comprehensions

  • Dictionary comprehensions

  • Set comprehensions

22. Decorators

  • Function decorators

  • Nested functions

Error Handling & File Handling

23. Exception Handling

  • Errors vs exceptions

  • try, except

  • Multiple exceptions

  • else, finally

  • Raising exceptions

24. File Handling

  • Opening files (open)

  • Modes (r, w, a, etc.)

  • Reading and writing files

  • Working with text files

Testing & Debugging Basics

25. Debugging

  • Common errors

  • Debugging techniques

26. Testing

  • Basic unit testing concepts



Python Coding Challenge - Day 1141 | What is the output of the following Python code?
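The challenge code (the original image is missing; the snippet below is reconstructed line by line from the explanation that follows):

```python
import copy

# Create a nested list (a list of lists)
a = [[1, 2], [3, 4]]

# Deep copy: the outer AND inner lists are copied, no shared references
b = copy.deepcopy(a)

# Modify the copy only
b[0][0] = 100

print(a)  # [[1, 2], [3, 4]] -- the original is unchanged
```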

 


Code Explanation:

🔹 1. Importing the Module
import copy
✅ Explanation:
Imports Python’s built-in copy module, which provides:
copy.copy() → shallow copy
copy.deepcopy() → deep copy

🔹 2. Creating a Nested List
a = [[1, 2], [3, 4]]
✅ Explanation:
a is a list of lists (a nested structure).
Memory structure: the outer list contains two inner lists.
a → [ [1, 2], [3, 4] ]

🔹 3. Deep Copy
b = copy.deepcopy(a)
✅ Explanation:
Creates a completely independent copy of a.
Both the outer list and the inner lists are copied separately.
🔍 Important:
b is not the same object as a, and b[0] is not the same object as a[0].
✔️ No shared references.

🔹 4. Modifying the Copied List
b[0][0] = 100
✅ Explanation:
Changes the first element of the first inner list in b, so:
b → [ [100, 2], [3, 4] ]

🔹 5. Printing the Original List
print(a)
✅ Explanation:
Since a and b are completely independent, changes in b do NOT affect a.

🎯 Final Output
[[1, 2], [3, 4]]
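For contrast, a shallow copy (copy.copy) copies only the outer list and shares the inner list objects, so the same modification would leak into the original. A minimal sketch:

```python
import copy

a = [[1, 2], [3, 4]]
shallow = copy.copy(a)      # new outer list, same inner list objects
deep = copy.deepcopy(a)     # new outer list AND new inner lists

shallow[0][0] = 99
print(a)    # [[99, 2], [3, 4]] -- shallow copy shares inner lists

deep[1][0] = 77
print(a)    # [[99, 2], [3, 4]] -- deep copy leaves a untouched
```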

Python Coding Challenge - Day 1140 | What is the output of the following Python code?
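The challenge code (the original image is missing; reconstructed line by line from the explanation that follows):

```python
def gen():
    yield 1
    yield 2
    yield 3

g = gen()       # returns a generator object; nothing runs yet
print(next(g))  # 1
print(next(g))  # 2
```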

 


Code Explanation:

🔹 1. Function Definition
def gen():
✅ Explanation:
A function gen is defined.
Because it uses yield, it is a generator function (not a normal function).

🔹 2. First yield
yield 1
✅ Explanation:
yield produces a value without ending the function.
It pauses execution and remembers its state.

🔹 3. Second yield
yield 2
✅ Explanation:
When resumed, the function continues from where it stopped and yields 2.

🔹 4. Third yield
yield 3
✅ Explanation:
On the next resume, it yields 3.
After this, the generator is exhausted.

🔹 5. Creating the Generator Object
g = gen()
✅ Explanation:
Calling gen() does NOT execute the function body immediately.
It returns a generator object; execution starts only when next() is called.

🔹 6. First next() Call
print(next(g))
🔍 What happens:
Starts execution of gen() and runs until the first yield.
✔️ Output:
1

🔹 7. Second next() Call
print(next(g))
🔍 What happens:
Resumes from the previous pause and continues to the second yield.
✔️ Output:
2

🎯 Final Output
1
2
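Continuing the same example: a third next() call would yield 3, and a fourth would raise StopIteration because the generator is exhausted:

```python
def gen():
    yield 1
    yield 2
    yield 3

g = gen()
print(next(g))  # 1
print(next(g))  # 2
print(next(g))  # 3

try:
    next(g)  # no values left
except StopIteration:
    print("generator exhausted")
```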

Python Coding Challenge - Question with Answer (ID-280426)
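The challenge code (the original image is missing; reconstructed from the explanation that follows):

```python
d = {"a": 1}          # dictionary with a single key "a"
print(d.get("b", 2))  # "b" is missing, so the default 2 is returned
```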

 


Explanation:

1. Creating a Dictionary
d = {"a": 1}
Here, a dictionary named d is created.
A dictionary in Python stores data in key–value pairs.
In this case:
Key: "a"
Value: 1

So the dictionary looks like:

{"a": 1}

2. Using the .get() Method
print(d.get("b", 2))
➤ What .get() does:
The .get() method is used to retrieve the value of a key from a dictionary.

Syntax:

dictionary.get(key, default_value)
➤ In this example:
"b" is the key we are trying to access.
2 is the default value.

3. Key Lookup Behavior
The dictionary d does NOT contain the key "b".
Instead of raising a KeyError, .get() returns the default value, which is 2.

4. Output
2
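For comparison, direct indexing with a missing key raises KeyError, while .get() never does. A minimal sketch:

```python
d = {"a": 1}

# Direct indexing raises KeyError for a missing key
try:
    print(d["b"])
except KeyError:
    print("KeyError raised")

# .get() returns the default instead of raising
print(d.get("b", 2))  # 2

# With no default, .get() returns None
print(d.get("b"))     # None
```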


April Python Bootcamp Day 17

 

Day 17: Web Scraping using Python

What is Web Scraping?

Web scraping is the process of extracting data from websites automatically using code instead of manually copying it.

It helps in:

  • Data collection
  • Automation
  • Building datasets
  • Market research

Tools Required

1. requests

  • Used to fetch webpage or API data
  • Works with HTTP requests

2. BeautifulSoup

  • Parses HTML content
  • Helps extract specific elements like headings, links, tables

Data Flow Understanding

HTML Scraping Flow:

Website → HTML → BeautifulSoup → Data

API Data Flow:

API Endpoint → JSON → Python → Data

Sample HTML File (index.html)

<!DOCTYPE html>
<html>
<head>
<title>Sample Webpage</title>
</head>
<body>

<h1>Main Heading</h1>
<h2>Sub Heading</h2>
<h3>Section Heading</h3>

<p>This is a paragraph about web scraping.</p>
<p>Python makes scraping easy using BeautifulSoup.</p>

<a href="https://www.google.com">Google</a><br>
<a href="https://www.github.com">GitHub</a>

<h2>Student Table</h2>
<table border="1">
<tr>
<th>Name</th>
<th>Age</th>
<th>City</th>
<th>Email</th>
</tr>
<tr>
<td>Piyush</td>
<td>21</td>
<td>Nagpur</td>
<td>piyush@example.com</td>
</tr>
<tr>
<td>Rahul</td>
<td>22</td>
<td>Pune</td>
<td>Rahul@gmail.com</td>
</tr>
</table>

</body>
</html>

Web Scraping using BeautifulSoup

from bs4 import BeautifulSoup

with open("index.html", "r", encoding="utf-8") as file:
    html_content = file.read()

soup = BeautifulSoup(html_content, "html.parser")

# 1. Title
print(f"Title: {soup.title.text}")

# 2. Headings
for tag in ["h1", "h2", "h3"]:
    for heading in soup.find_all(tag):
        print(f"{tag.upper()}: {heading.text}")

# 3. Paragraphs
for p in soup.find_all("p"):
    print(p.text)

# 4. Links
for a in soup.find_all("a"):
    print(f"Text: {a.get_text()}, URL: {a.get('href')}")

# 5. Table data
table = soup.find("table")
rows = table.find_all("tr")

for row in rows:
    cols = row.find_all(["td", "th"])
    data = [col.text.strip() for col in cols]
    print(data)

# 6. Extract all text
print(soup.get_text(separator="\n"))

Web Scraping using APIs (JSON Data)

import requests

url = "https://jsonplaceholder.typicode.com/posts"

response = requests.get(url)

if response.status_code == 200:
    data = response.json()
    for post in data[:5]:
        print(f"Title: {post['title']}")
        print(f"Body: {post['body']}")
else:
    print("Error:", response.status_code)

Advanced Example: Fetch API Data and Save to CSV

import requests
import csv

def fetch_api_data(url):
    headers = {"User-Agent": "Mozilla/5.0"}
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print("Error:", e)
        return None

url = "https://jsonplaceholder.typicode.com/users"
data = fetch_api_data(url)

if data:
    for user in data:
        print(f"{user['name']} - {user['email']}")

    with open("HR.csv", "w", newline="", encoding="utf-8") as file:
        writer = csv.writer(file)
        writer.writerow(["Name", "Email"])
        for user in data:
            writer.writerow([user["name"], user["email"]])

Key Concepts Learned

  • Difference between HTML scraping and API scraping
  • How to parse HTML using BeautifulSoup
  • Extracting headings, paragraphs, links, and tables
  • Fetching JSON data using requests
  • Saving extracted data into CSV files

Best Practices for Web Scraping

  • Always check website permissions (robots.txt)
  • Avoid sending too many requests (rate limiting)
  • Use headers like User-Agent
  • Prefer APIs over HTML scraping when available
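The robots.txt check from the first best practice can be automated with the standard library's urllib.robotparser; the rules and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; a real scraper would instead call
# rp.set_url("https://example.com/robots.txt") followed by rp.read().
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/public/page"))   # True
```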

Summary

Web scraping is a powerful skill for automating data extraction.
Using BeautifulSoup, you can extract structured data from HTML, while requests helps fetch data from APIs efficiently.


Assignment Questions

Theory-Based

  1. What is web scraping? Explain with an example.
  2. Difference between web scraping and API data fetching.
  3. What is BeautifulSoup used for?
  4. Why is JSON preferred in APIs?
  5. What are headers in HTTP requests and why are they used?

Practical Questions

  1. Extract all:
    • Headings (h1, h2, h3)
    • Paragraphs
    • Links
      from the given HTML file.
  2. Modify the code to:
    • Extract only emails from the table.
  3. Scrape:
    • Only table data and convert it into a list of dictionaries.
  4. Fetch data from:

    https://jsonplaceholder.typicode.com/comments
    • Print name and email of first 10 users.
  5. Save API data into a CSV file with:
    • ID, Name, Email

Challenge Tasks

  1. Build a scraper that:
    • Extracts all links from a webpage
    • Saves them into a text file
  2. Create a script that:
    • Scrapes table data
    • Converts it into JSON format
  3. Combine both:
    • Scrape HTML data
    • Store it in CSV
    • Also fetch API data and merge both datasets
  4. Add error handling:
    • Handle missing tags
    • Handle request failures

