Wednesday, 11 February 2026
Day 50: Thinking Python Is Slow (Without Context)
🐍 Python Mistakes Everyone Makes ❌
Day 50: Thinking Python Is Slow (Without Context)
This is one of the most common misconceptions about Python — and also one of the most misleading.
❌ The Mistake
Blaming Python whenever code runs slowly.
total = 0
for i in range(10_000_000):
    total += i
“Python is slow” becomes the conclusion — without asking why.
❌ Why This Thinking Fails
Python is interpreted, not compiled
Pure Python loops are slower than C-level loops
Wrong tools are used for the problem
Performance bottlenecks are misunderstood
No profiling is done before judging
Speed depends on how you use Python, not just the language itself.
✅ The Correct Perspective
Python is:
🚀 Fast for development
⚡ Fast when using optimized libraries
🧠 Designed for productivity and clarity
Most “fast Python” code actually runs in C underneath.
import numpy as np
arr = np.arange(10_000_000)
total = arr.sum()  # ✅ runs in optimized C
🧠 Where Python Is Fast
Data science (NumPy, Pandas)
Web backends
Automation & scripting
Machine learning
Glue code connecting systems
🧠 Where Python Is Not Ideal
Tight CPU-bound loops
Real-time systems
Extremely low-latency tasks
And that’s okay: no language is perfect everywhere.
🧠 Simple Rule to Remember
🧠 Profile before judging speed
🧠 Use the right tools and libraries
🧠 Python is slow only when misused
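Before concluding anything, measure. A minimal sketch with the standard-library `time.perf_counter`, comparing a pure-Python loop against the C-implemented `sum()` built-in (exact numbers vary by machine):

```python
import time

def pure_python():
    total = 0
    for i in range(1_000_000):
        total += i
    return total

start = time.perf_counter()
pure_python()
loop_time = time.perf_counter() - start

start = time.perf_counter()
sum(range(1_000_000))  # the same work, executed in C
builtin_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  sum(): {builtin_time:.4f}s")
```

The loop isn't slow because "Python is slow"; it's slow because the work runs in the interpreter instead of in C.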
🏁 Final Takeaway
Python isn’t slow;
bad assumptions are.
Use Python for what it’s great at.
Optimize when needed.
And choose the right tool for the job.
🎉 Congratulations!
You’ve completed 50 Days of Python Mistakes Everyone Makes 🎉🔥
Day 49: Writing Code Without Tests
🐍 Python Mistakes Everyone Makes ❌
Day 49: Writing Code Without Tests
❌ The Mistake
Writing features and logic without any tests and assuming the code “just works”.
def add(a, b):
return a + b
Looks fine… until it’s used incorrectly.
❌ What Goes Wrong?
Bugs go unnoticed
Changes break existing functionality
Fear of refactoring
Manual testing becomes repetitive
Production issues appear unexpectedly
✅ The Correct Way
Write simple tests to verify behavior.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
Even basic tests catch problems early.
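The test above can be run with a runner like pytest, or even executed directly as a plain script. A minimal self-contained sketch (the third assert is an extra case added here for illustration):

```python
def add(a, b):
    return a + b

def test_add():
    # each assert documents one expected behaviour
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add("a", "b") == "ab"  # + also concatenates strings

test_add()  # raises AssertionError on failure; silence means success
print("all tests passed")
```

Notice how the last assert documents a behaviour ("add" also concatenates) that callers might not expect — exactly the kind of thing tests surface early.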
❌ Why This Fails (Main Points)
No safety net for changes
Bugs discovered too late
Hard to refactor confidently
Code behavior is undocumented
Debugging takes longer
🧠 Simple Rule to Remember
🧠 If it’s important, test it
🧠 Tests are documentation
🧠 Untested code is broken code waiting to happen
🏁 Final Takeaway
Tests don’t slow you down —
they save time, prevent regressions, and build confidence.
Even one test is better than none.
Saturday, 7 February 2026
Day 48: Overusing try-except Instead of Validation
🐍 Python Mistakes Everyone Makes ❌
Day 48: Overusing try-except Instead of Validation
❌ The Mistake
Using try-except to control normal program flow instead of validating input first.
def get_age(value):
    try:
        return int(value)
    except ValueError:
        return 0  # handle invalid number input only
❌ What’s Wrong Here?
Catches all exceptions, even unexpected ones
Hides bugs (e.g. None, objects, or logic errors)
Makes debugging harder
Uses exceptions for normal logic, not errors
Slower than simple checks
✅ The Correct Way
Validate input before converting.
def get_age(value):
    if isinstance(value, str) and value.isdigit():
        return int(value)
    return 0
✔ Use try-except Only for Truly Exceptional Cases
def read_number(value):
    try:
        return int(value)
    except ValueError:
        return 0  # ✅ specific exception
❌ Why This Fails (Main Points)
Exceptions are for unexpected errors
Overusing them hides real issues
Broad except: masks bugs
Debugging becomes painful
Code intent becomes unclear
🧠 Simple Rule to Remember
🧠 Validate when you expect failure
🧠 Catch exceptions only when something unexpected can happen
🧠 Never use except: unless you re-raise
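The last rule can be sketched like this: if you must catch broadly (for example, to log), re-raise so the error is never swallowed (a minimal sketch; `risky` is a hypothetical stand-in for failing code):

```python
import logging

def risky():
    raise RuntimeError("something unexpected")

def run():
    try:
        risky()
    except Exception:
        logging.exception("risky() failed")  # record the context...
        raise  # ...then re-raise so callers still see the real error

try:
    run()
except RuntimeError as e:
    print(f"caller still sees: {e}")
```

The bare `raise` re-raises the active exception with its original traceback intact.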
🏁 Final Takeaway
try-except is powerful — but dangerous when abused.
Good code:
Predicts errors
Validates inputs
Handles only what it expects
Bad code:
Catches everything
Hopes nothing breaks
Day 47: Ignoring Memory Leaks in Long-Running Apps
🐍 Python Mistakes Everyone Makes ❌
Day 47: Ignoring Memory Leaks in Long-Running Apps
❌ The Mistake
Keeping references alive forever in a long-running process.
# Memory leak example
cache = []

def process(data):
    cache.append(data)  # ❌ cache grows forever

while True:
    process("some large data")
❌ What’s wrong here?
cache is global
It keeps growing
Garbage collector cannot free memory
Memory usage increases endlessly
In servers, workers, or background jobs, this will eventually crash the app.
✅ The Correct Way
Use bounded data structures or explicitly clean up memory.
✔ Option 1: Limit cache size
from collections import deque

cache = deque(maxlen=1000)  # ✅ fixed size

def process(data):
    cache.append(data)

while True:
    process("some large data")
✔ Option 2: Explicit cleanup
def process(data):
    cache.append(data)
    if len(cache) > 1000:
        cache.clear()  # ✅ release references explicitly
✔ Option 3: Use weak references (advanced)
import weakref

class Data:
    pass

cache = weakref.WeakSet()
d = Data()
cache.add(d)
Objects are removed automatically when no strong references exist.
❌ Why This Fails (Main Points)
Python only frees memory when references disappear
Global variables live forever
Unbounded caches slowly eat RAM
Garbage collection timing is unpredictable
Long-running apps amplify small leaks
🧠 Simple Rule to Remember
🧠 If something is referenced, it stays in memory
🧠 Long-running apps must manage memory explicitly
🧠 Always limit caches and clean resources
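A bounded cache can also come straight from the standard library: `functools.lru_cache` evicts the oldest entries automatically once `maxsize` is reached (a minimal sketch; `expensive` is a hypothetical function):

```python
from functools import lru_cache

@lru_cache(maxsize=1000)  # ✅ at most 1000 results are kept alive
def expensive(n):
    return n * n

for i in range(5000):
    expensive(i)

print(expensive.cache_info().currsize)  # never exceeds maxsize
```

Unlike a hand-rolled global dict, the cache can never grow without bound.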
🏁 Final Takeaway
Python won’t save you from memory leaks.
Short scripts finish fast.
Servers don’t.
If your app runs forever — your mistakes will show up eventually.
Friday, 6 February 2026
Day 46: Misusing @staticmethod
🐍 Python Mistakes Everyone Makes ❌
Day 46: Misusing @staticmethod
@staticmethod looks clean and convenient—but using it incorrectly can make your code harder to understand and maintain.
❌ The Mistake
Using @staticmethod when the method actually depends on class or instance data.
class User:
    role = "admin"

    @staticmethod
    def is_admin():
        return role == "admin"  # ❌ role is undefined
This fails because static methods don’t have access to self or cls.
❌ Why This Fails
@staticmethod receives no implicit arguments
Cannot access instance (self) data
Cannot access class (cls) data
Often hides the method’s real dependency
Leads to confusing or broken logic
If a method needs data from the object or class, it should not be static.
✅ The Correct Way
✔️ Use @classmethod for class-level logic
class User:
    role = "admin"

    @classmethod
    def is_admin(cls):
        return cls.role == "admin"
✔️ Use instance methods when object state matters
class User:
    def __init__(self, role):
        self.role = role

    def is_admin(self):
        return self.role == "admin"
✔️ Use @staticmethod only when truly independent
class MathUtils:
    @staticmethod
    def add(a, b):
        return a + b
No class state. No instance state. Pure logic.
🧠 Simple Rule to Remember
👉 Needs self → instance method
👉 Needs cls → class method
👉 Needs neither → static method
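The three rules side by side, in one sketch (a hypothetical `Account` class, not from the post):

```python
class Account:
    currency = "USD"  # class-level data

    def __init__(self, balance):
        self.balance = balance  # instance data

    def is_overdrawn(self):  # needs self → instance method
        return self.balance < 0

    @classmethod
    def default_currency(cls):  # needs cls → class method
        return cls.currency

    @staticmethod
    def to_cents(amount):  # needs neither → static method
        return round(amount * 100)

a = Account(-5)
print(a.is_overdrawn())            # True
print(Account.default_currency())  # USD
print(Account.to_cents(1.25))      # 125
```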
🏁 Final Takeaway
@staticmethod is not “better” — it’s just different.
Use it only when:
The method is logically related to the class
It does not depend on object or class state
Clarity beats cleverness every time.
Day 45: Not Profiling Before Optimizing
🐍 Python Mistakes Everyone Makes ❌
Day 45: Not Profiling Before Optimizing
One of the biggest performance mistakes is trying to optimize code without knowing where the real problem is.
❌ The Mistake
Optimizing code based on guesses.
# Premature optimization
data = []
for i in range(100000):
    data.append(i * 2)
You might refactor this endlessly — but it may not even be the slow part.
❌ Why This Fails
You optimize the wrong code
Waste time on non-critical paths
Increase code complexity unnecessarily
Miss the actual performance bottleneck
Can even make performance worse
Guessing is not optimization.
✅ The Correct Way
Profile first. Then optimize only what matters.
import cProfile

def work():
    data = []
    for i in range(100000):
        data.append(i * 2)

cProfile.run("work()")
This shows:
Which functions are slow
How often they’re called
Where time is really spent
🧠 Common Profiling Tools
cProfile — built-in, reliable
timeit — for small code snippets
line_profiler — line-by-line analysis
perf / py-spy — production profiling
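For a quick check of a single snippet, `timeit` from the list above is often the fastest way to get a number (a minimal sketch; the timings are illustrative and vary by machine):

```python
import timeit

# compare two candidate implementations before "optimizing" either one
append_loop = timeit.timeit(
    "data = []\nfor i in range(1000):\n    data.append(i * 2)",
    number=1000,
)
comprehension = timeit.timeit(
    "data = [i * 2 for i in range(1000)]",
    number=1000,
)
print(f"append loop:   {append_loop:.4f}s")
print(f"comprehension: {comprehension:.4f}s")
```

Now a claim like "the comprehension is faster" is a measurement, not a guess.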
🧠 Simple Rule to Remember
👉 Measure first, optimize later
👉 Fix bottlenecks, not guesses
🏁 Final Takeaway
Fast code isn’t about clever tricks — it’s about informed decisions.
Before rewriting anything, ask one question:
👉 Do I know what’s actually slow?
Profile. Then optimize.
Wednesday, 4 February 2026
Day 44: Using Threads for CPU-Bound Tasks
🐍 Python Mistakes Everyone Makes ❌
Day 44: Using Threads for CPU-Bound Tasks
Threads in Python feel like the obvious way to make programs faster.
But when it comes to CPU-bound work, threads often do the opposite.
❌ The Mistake
Using threads to speed up heavy computation.
import threading

def work():
    total = 0
    for i in range(10_000_000):
        total += i

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
This looks parallel — but it isn’t.
❌ Why This Fails
Python has a Global Interpreter Lock (GIL)
Only one thread runs Python bytecode at a time
CPU-bound threads cannot execute in parallel
Thread context-switching adds overhead
Performance may be worse than single-threaded code
🧠 What Threads Are Actually Good For
Threads work well for:
Network requests
File I/O
Waiting on external resources
They are not meant for heavy computation.
✅ The Correct Way
Use multiprocessing for CPU-bound tasks.
from multiprocessing import Pool

def work(n):
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    with Pool(4) as p:
        p.map(work, [10_000_000] * 4)
Each process:
Has its own Python interpreter
Has its own GIL
Runs truly in parallel on multiple cores
🧠 Simple Rule to Remember
👉 Threads for I/O-bound work
👉 Processes for CPU-bound work
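You can verify the problem empirically: timing the threaded version against a plain sequential loop shows little or no speedup (a minimal sketch; numbers vary by machine, and on the experimental free-threaded CPython builds the threaded version can genuinely get faster):

```python
import threading
import time

N = 2_000_000

def work():
    total = 0
    for i in range(N):
        total += i

# sequential: run the job 4 times in a row
start = time.perf_counter()
for _ in range(4):
    work()
sequential = time.perf_counter() - start

# "parallel": 4 threads, same job
start = time.perf_counter()
threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
# with the GIL, the threaded version is no faster (often slightly slower)
```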
🏁 Final Takeaway
If your program is doing heavy computation, threads won’t save you.
Understanding the GIL helps you choose the right tool — and avoid wasted effort.
Write smarter, faster Python 🐍⚡
Friday, 30 January 2026
Day 43: Mutating Arguments Passed to Functions
🐍 Python Mistakes Everyone Makes ❌
Day 43: Mutating Arguments Passed to Functions
This is one of those bugs that looks harmless, works fine in small tests —
and then causes mysterious behavior later in production.
❌ The Mistake
Modifying a mutable argument (like a list or dictionary) inside a function.
def add_item(items):
    items.append("apple")  # ❌ mutates the caller's list

my_list = []
add_item(my_list)
print(my_list)  # ['apple']
At first glance, this seems fine.
But the function silently changes data it does not own.
❌ Why This Is Dangerous
❌ Side effects are hidden
❌ Makes debugging extremely hard
❌ Breaks assumptions about data immutability
❌ Functions stop being predictable
❌ Reusing the function becomes risky
The caller didn’t explicitly ask for the list to be modified — but it happened anyway.
⚠️ A More Subtle Example
def process(data):
data["count"] += 1 # ❌ mutates shared state
If data is shared across multiple parts of your app, this change ripples everywhere.
✅ The Correct Way: Avoid Mutation
✔️ Option 1: Work on a copy
def add_item(items):
    new_items = items.copy()
    new_items.append("apple")
    return new_items

my_list = []
my_list = add_item(my_list)
Now the function is pure and predictable.
✔️ Option 2: Be explicit about mutation
If mutation is intentional, make it obvious:
def add_item_in_place(items):
    items.append("apple")
Clear naming prevents surprises.
🧠 Why This Matters
Functions should:
Do one thing
Have clear contracts
Avoid unexpected side effects
Predictable code is maintainable code.
🧠 Simple Rule to Remember
🧠 If a function mutates its arguments, make it explicit or avoid it.
When in doubt:
Return new data instead of modifying input.
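The same pattern works for the dictionary example from earlier: build and return a new dict instead of touching the shared one (a minimal sketch):

```python
def process(data):
    # ✅ copy-and-update: the caller's dict is left untouched
    return {**data, "count": data["count"] + 1}

shared = {"count": 0}
updated = process(shared)
print(shared)   # {'count': 0}
print(updated)  # {'count': 1}
```

The `{**data, ...}` unpacking creates a shallow copy with the updated key, so the original stays intact.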
🏁 Final Takeaway
Hidden mutation is one of Python’s most common foot-guns.
Write functions that:
Are safe to reuse
Don’t surprise callers
Make data flow obvious
Your future self (and teammates) will thank you.
Day 42: Not Using __slots__ When Needed
🐍 Python Mistakes Everyone Makes ❌
Day 42: Not Using __slots__ When Needed
Python classes are flexible by default—but that flexibility comes with a cost. When you create many objects, not using __slots__ can silently waste memory and reduce performance.
❌ The Mistake
Defining classes without __slots__ when you know exactly which attributes the objects will have.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
This looks perfectly fine—but every instance gets a __dict__ to store attributes dynamically.
❌ Why This Fails
Each object stores attributes in a __dict__
Extra memory overhead per instance
Slower attribute access
Allows accidental creation of new attributes
Becomes expensive when creating thousands or millions of objects
This usually goes unnoticed until performance or memory becomes a problem.
✅ The Correct Way
Use __slots__ when:
Object structure is fixed
You care about memory or speed
You’re creating many instances
class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y
✅ What __slots__ Gives You
📉 Lower memory usage
⚡ Faster attribute access
🚫 Prevents accidental attributes
🧠 Clear object structure
p = Point(1, 2)
p.z = 3 # ❌ AttributeError
This is a feature, not a limitation.
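You can see the difference directly: a slotted instance has no per-instance `__dict__` at all (a minimal sketch; `PlainPoint`/`SlotPoint` are illustrative names):

```python
import sys

class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class SlotPoint:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

plain = PlainPoint(1, 2)
slot = SlotPoint(1, 2)

print(hasattr(plain, "__dict__"))  # True: attributes live in a per-instance dict
print(hasattr(slot, "__dict__"))   # False: attributes live in fixed slots

# the per-instance dict is pure overhead when the structure is fixed
print(sys.getsizeof(plain.__dict__), "bytes of dict overhead per instance")
```

Multiply that overhead by millions of instances and the savings become significant.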
🧠 When NOT to Use __slots__
When objects need dynamic attributes
When subclassing extensively (needs extra care)
When simplicity matters more than optimization
🧠 Simple Rule to Remember
👉 Many objects + fixed attributes → use __slots__
👉 Few objects or flexible design → skip it
🏁 Final Takeaway
__slots__ is not mandatory—but it’s powerful when used correctly.
Use it when:
Performance matters
Memory matters
Object structure is predictable
Write Python that’s not just correct—but efficient too.
Wednesday, 28 January 2026
Day 35: Assuming __del__ Runs Immediately
🐍 Python Mistakes Everyone Makes ❌
Day 35: Assuming __del__ Runs Immediately
Python’s __del__ method looks like a destructor, so it’s easy to assume it behaves like one in languages such as C++ or Java.
But this assumption can lead to unpredictable bugs and resource leaks.
❌ The Mistake
Relying on __del__ to clean up resources immediately when an object is “deleted”.
class Demo:
    def __del__(self):
        print("Object deleted")

obj = Demo()
obj = None  # ❌ assuming __del__ runs here
You might expect "Object deleted" to print right away — but that is not guaranteed.
❌ Why This Fails
__del__ is called only when an object is garbage collected
Garbage collection timing is not guaranteed
Other references to the object may still exist
Circular references can delay or prevent __del__
Behavior can vary across Python implementations (CPython, PyPy, etc.)
In short:
👉 Deleting a reference ≠ deleting the object
⚠️ Real-World Problems This Causes
Files left open
Database connections not closed
Network sockets hanging
Memory leaks
Inconsistent behavior across environments
These bugs are often hard to detect and harder to debug.
✅ The Correct Way
Always clean up resources explicitly.
class Resource:
    def close(self):
        print("Resource released")

r = Resource()
try:
    print("Using resource")
finally:
    r.close()  # ✅ guaranteed cleanup
This ensures cleanup no matter what happens.
🧠 Even Better: Use with
When possible, use context managers:
with open("data.txt") as f:
    data = f.read()
# file is safely closed here
Python guarantees cleanup when exiting the with block.
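You can give your own resources the same guarantee with `contextlib.contextmanager` (a minimal sketch; `resource()` is a hypothetical helper):

```python
from contextlib import contextmanager

@contextmanager
def resource(name):
    print(f"acquired {name}")
    try:
        yield name
    finally:
        print(f"released {name}")  # runs even if the body raises

with resource("db-connection") as r:
    print(f"using {r}")
```

The `finally` block runs on every exit path, which is exactly the guarantee `__del__` cannot give you.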
🚫 Why __del__ Is Dangerous for Cleanup
Order of destruction is unpredictable
May never run before program exits
Errors inside __del__ are ignored
Makes code fragile and hard to reason about
__del__ is for last-resort cleanup, not critical resource management.
🧠 Simple Rule to Remember
👉 Never rely on __del__ for important cleanup
👉 Use with or explicit cleanup methods instead
🏁 Final Takeaway
If a resource must be released, don’t wait for Python to decide when.
Be explicit.
Be predictable.
Write safer Python.
Day 41: Writing Unreadable One-Liners
🐍 Python Mistakes Everyone Makes ❌
Day 41: Writing Unreadable One-Liners
Python allows powerful one-liners — but just because you can write them doesn’t mean you should.
Unreadable one-liners are a common mistake that hurts maintainability and clarity.
❌ The Mistake
Cramming too much logic into a single line.
result = [x * 2 for x in data if x > 0 and x % 2 == 0 and x < 100]
Or worse:
total = sum(map(lambda x: x*x, filter(lambda x: x % 2 == 0, nums)))
It works — but at what cost?
❌ Why This Fails
Hard to read
Hard to debug
Hard to modify
Logic is hidden inside expressions
New readers struggle to understand intent
Readable code matters more than clever code.
✅ The Correct Way
Break logic into clear, readable steps.
filtered = []
for x in data:
    if x > 0 and x % 2 == 0 and x < 100:
        filtered.append(x * 2)
result = filtered
Or a clean list comprehension:
result = [
    x * 2
    for x in data
    if x > 0
    if x % 2 == 0
    if x < 100
]
Readable ≠ longer.
Readable = clearer.
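A third readable option is to name the condition, so the comprehension reads like a sentence (a minimal sketch; the sample `data` is illustrative):

```python
def is_valid(x):
    """Positive even number below 100."""
    return x > 0 and x % 2 == 0 and x < 100

data = [3, 4, -2, 50, 120]
result = [x * 2 for x in data if is_valid(x)]
print(result)  # [8, 100]
```

The predicate's name and docstring now carry the intent that was buried in the one-liner.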
🧠 Why Readability Wins
Python emphasizes readability
Future-you will thank present-you
Code is read more often than written
Easier debugging and collaboration
Even Guido van Rossum agrees.
🧠 Simple Rule to Remember
👉 If it needs a comment, split it
👉 Clarity > cleverness
👉 Write code for humans first, computers second
🏁 Final Takeaway
One-liners are tools — not flexes.
If your code makes readers pause and squint, it’s time to refactor.
Clean Python is readable Python. 🐍✨
Monday, 26 January 2026
Day 40: Overusing Inheritance Instead of Composition
🐍 Python Mistakes Everyone Makes ❌
Day 40: Overusing Inheritance Instead of Composition
Inheritance is one of the first OOP concepts most Python developers learn.
And that’s exactly why it’s often overused.
Just because you can inherit from a class doesn’t mean you should.
❌ The Mistake
Using inheritance when the relationship is not truly “is-a”.
class Engine:
    def start(self):
        print("Engine started")

class Car(Engine):  # ❌ Car is NOT an Engine
    def drive(self):
        print("Car is driving")
This design implies:
A Car is an Engine
Which is logically incorrect.
❌ Why This Fails
❌ Creates tight coupling
❌ Makes the code harder to change later
❌ Breaks real-world modeling
❌ Leads to fragile inheritance hierarchies
❌ Prevents reusing components independently
If Engine changes, Car breaks.
✅ The Correct Way: Composition
Use composition when an object has another object.
class Engine:
    def start(self):
        print("Engine started")

class Car:
    def __init__(self):
        self.engine = Engine()  # ✅ Composition

    def drive(self):
        self.engine.start()
        print("Car is driving")
Now:
A Car has an Engine
This is flexible, realistic, and scalable.
🧠 Why Composition Wins
Components can be swapped or upgraded
Code becomes modular
Easier testing and reuse
Fewer breaking changes
Cleaner mental model
Want an electric engine? Just replace it.
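Swapping really is that easy once the engine is injected through the constructor (a minimal sketch; `ElectricEngine` is a hypothetical drop-in):

```python
class Engine:
    def start(self):
        print("Engine started")

class ElectricEngine:
    def start(self):
        print("Electric engine started silently")

class Car:
    def __init__(self, engine):
        self.engine = engine  # ✅ injected: anything with .start() works

    def drive(self):
        self.engine.start()
        print("Car is driving")

Car(Engine()).drive()
Car(ElectricEngine()).drive()  # same Car, different engine
```

`Car` never needed to change; that is the flexibility composition buys you.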
🧠 Simple Rule to Remember
🧩 Use inheritance for “is-a” relationships
👉 Use composition for “has-a” relationships
If it feels awkward to say out loud — it’s probably wrong in code too.
🏁 Final Takeaway
Inheritance is powerful — but dangerous when misused.
Most real-world systems are built from parts, not types.
Favor composition first, and reach for inheritance only when the relationship is truly structural.
Good design is about flexibility, not clever hierarchies.
Day 38: Blocking I/O in async programs
🐍 Python Mistakes Everyone Makes ❌
Day 38: Blocking I/O in Async Programs
Async programming in Python is powerful—but only if you follow the rules.
One of the most common mistakes is using blocking I/O inside async code, which silently kills performance.
❌ The Mistake
Using a blocking function like time.sleep() inside an async function.
import time
import asyncio

async def task():
    time.sleep(2)  # ❌ blocks the event loop
    print("Done")

asyncio.run(task())
At first glance, this looks fine.
But under the hood, it breaks how async works.
❌ Why This Fails
time.sleep() blocks the event loop
While sleeping, no other async tasks can run
Async code becomes slow and sequential
No error is raised — just poor performance
This makes the bug easy to miss and hard to debug.
🚨 What’s Really Happening
Async programs rely on an event loop to switch between tasks efficiently.
Blocking calls stop the event loop entirely.
Result:
No concurrency
Wasted async benefits
Performance similar to synchronous code
✅ The Correct Way
Use non-blocking async alternatives like asyncio.sleep().
import asyncio

async def task():
    await asyncio.sleep(2)
    print("Done")

asyncio.run(task())
Why this works:
await pauses only the current task
The event loop stays responsive
Other async tasks can run in the meantime
🧠 Common Blocking Functions to Avoid in Async Code
time.sleep()
Blocking file I/O
Blocking network calls
CPU-heavy computations
Use:
asyncio.sleep()
Async libraries (aiohttp, aiofiles)
Executors for CPU-heavy work
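The executor option looks like this: `loop.run_in_executor` hands blocking work to a thread pool so the event loop stays free (a minimal sketch; `blocking_io` is a hypothetical stand-in for a blocking call):

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.5)  # stand-in for a blocking library call
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # runs blocking_io in a worker thread; other tasks keep running
    result = await loop.run_in_executor(None, blocking_io)
    print(result)

asyncio.run(main())
```

On Python 3.9+ the shorthand `await asyncio.to_thread(blocking_io)` does the same thing.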
🧠 Simple Rule to Remember
👉 Blocking calls freeze the event loop
👉 Use await, not blocking functions
👉 Never block the event loop in async code
🏁 Final Takeaway
Async code only works when everything cooperates.
One blocking call can ruin your entire async design.
Write async code the async way —
Fast, non-blocking, and scalable.
Friday, 23 January 2026
Day 39: Ignoring GIL Assumptions
🐍 Python Mistakes Everyone Makes ❌
Day 39: Ignoring GIL Assumptions
Python makes multithreading look easy — but under the hood, there’s a critical detail many developers overlook: the Global Interpreter Lock (GIL).
Ignoring it can lead to slower programs instead of faster ones.
❌ The Mistake
Using threads to speed up CPU-bound work.
import threading

def work():
    total = 0
    for i in range(10_000_000):
        total += i

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
This looks parallel — but it isn’t.
❌ Why This Fails
Python has a Global Interpreter Lock (GIL)
Only one thread executes Python bytecode at a time
CPU-bound tasks do not run in parallel
Threads add context-switching overhead
Performance can be worse than single-threaded code
🧠 What the GIL Really Means
Threads are great for I/O-bound tasks
Threads are bad for CPU-bound tasks
Multiple CPU cores ≠ parallel Python threads
The GIL protects memory safety, but limits CPU parallelism.
✅ The Correct Way
Use multiprocessing for CPU-bound work.
from multiprocessing import Pool

def work(n):
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    with Pool(4) as p:
        p.map(work, [10_000_000] * 4)
Why this works:
Each process has its own Python interpreter
No shared GIL
True parallel execution across CPU cores
🧠 When to Use What
| Task Type | Best Choice |
|---|---|
| I/O-bound (network, files) | threading, asyncio |
| CPU-bound (math, loops) | multiprocessing |
| Mixed workloads | Combine wisely |
🧠 Simple Rule to Remember
👉 Threads ≠ CPU parallelism in Python
👉 GIL blocks parallel bytecode execution
👉 Use multiprocessing for CPU-heavy tasks
🏁 Final Takeaway
Threads won’t make CPU-heavy Python code faster.
Understanding the GIL helps you choose the right concurrency model — and avoid hidden performance traps.
Know the limits. Write smarter Python. 🐍⚡
Wednesday, 21 January 2026
Day 37: Using eval() Unsafely
🐍 Python Mistakes Everyone Makes ❌
Day 37: Using eval() Unsafely
eval() often looks like a quick and clever solution — you pass a string, and Python magically turns it into a result.
But this convenience comes with serious security risks.
❌ The Mistake
Using eval() directly on user input.
user_input = "2 + 3"
result = eval(user_input)
print(result)
This works for simple math, but it also opens the door to executing arbitrary code.
❌ Why This Fails
eval() executes arbitrary Python code
Malicious input can run system commands
One unsafe input can compromise your entire program
Makes your application vulnerable to attacks
Example of dangerous input:
__import__("os").system("rm -rf /")
If passed to eval(), this could execute system-level commands.
🚨 Why This Is So Dangerous
No sandboxing
Full access to Python runtime
Can read, write, or delete files
Can expose secrets or credentials
Even trusted-looking input can be manipulated.
✅ The Correct Way
If you need to parse basic Python literals, use ast.literal_eval().
import ast

user_input = "[1, 2, 3]"
result = ast.literal_eval(user_input)
print(result)
Why this is safer:
Only allows literals (strings, numbers, lists, dicts, tuples)
No function calls
No code execution
Raises an error for unsafe input
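You can see the safety difference directly: `literal_eval` accepts plain literals but refuses anything that contains a call (a minimal sketch):

```python
import ast

safe = ast.literal_eval("{'a': 1, 'b': [2, 3]}")
print(safe)  # {'a': 1, 'b': [2, 3]}

try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError as e:
    print(f"rejected: {e}")  # ✅ no code is executed
```

The malicious string is parsed, found to contain a function call, and rejected before anything runs.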
🧠 When to Avoid eval() Completely
User input
Web applications
Configuration parsing
Any untrusted source
In most cases, there is always a safer alternative.
🧠 Simple Rule to Remember
👉 eval() executes code, not just expressions
👉 Never use eval() on user input
👉 If you don’t fully trust the input — don’t use eval()
🏁 Final Takeaway
eval() is powerful — and dangerous.
Using it without caution is like handing your program’s keys to strangers.
Choose safety.
Choose clarity.
Write secure Python.
Tuesday, 20 January 2026
Day 36: Misusing Decorators
🐍 Python Mistakes Everyone Makes ❌
Day 36: Misusing Decorators
Decorators are powerful—but easy to misuse. A small mistake can change function behavior or break it silently.
❌ The Mistake
Forgetting to return the wrapped function’s result.
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Before function")
        func(*args, **kwargs)  # ❌ return missing
        print("After function")
    return wrapper

@my_decorator
def greet():
    return "Hello"

print(greet())  # None
❌ Why This Fails
The wrapper does not return the function’s result
The original return value is lost
Function behavior changes unexpectedly
No error is raised — silent bug
✅ The Correct Way
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Before function")
        result = func(*args, **kwargs)
        print("After function")
        return result
    return wrapper

@my_decorator
def greet():
    return "Hello"

print(greet())  # Hello ✅
✔ Another Common Decorator Mistake
Not preserving metadata:
from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper
🧠 Simple Rule to Remember
👉 Always return the wrapped function’s result
👉 Use functools.wraps
👉 Test decorators carefully
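Why `functools.wraps` matters: without it, the wrapper's own metadata replaces the original function's (a minimal sketch; `bare`/`preserved` are illustrative names):

```python
from functools import wraps

def bare(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def preserved(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bare
def greet():
    "Say hello."

@preserved
def wave():
    "Wave hello."

print(greet.__name__)  # wrapper  ❌ original name lost
print(wave.__name__)   # wave     ✅ metadata preserved
```

Lost metadata breaks introspection, documentation tools, and some test frameworks, so `@wraps` is worth the one extra line.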
Decorators are powerful; handle them with care 🐍
Sunday, 18 January 2026
Day 34: Forgetting to Call Functions
🐍 Python Mistakes Everyone Makes ❌
Day 34: Forgetting to Call Functions
This is one of the most common and sneaky Python mistakes. It mostly affects beginners, but it still trips up experienced developers during refactoring or debugging.
❌ The Mistake
Defining a function correctly… but forgetting to actually call it.
def greet():
    print("Hello!")

greet  # ❌ function is NOT executed
At a glance, this looks fine.
But nothing happens.
❌ Why This Fails
greet refers to the function object
Without (), the function is never executed
Python does not raise an error
The program silently continues
This makes the bug easy to miss
You’ve created the function—but never told Python to run it.
✅ The Correct Way
Call the function using parentheses:
def greet():
    print("Hello!")

greet()  # ✅ function is executed
Now Python knows you want to run the code inside the function.
🧠 What’s Really Happening
In Python:
Functions are first-class objects
You can pass them around, store them, or assign them
Writing greet just references the function
Writing greet() calls the function
This feature is powerful—but also the reason this mistake happens so often.
⚠️ Common Real-World Scenarios
1️⃣ Forgetting to call a function inside a loop
for _ in range(3):
    greet  # ❌ nothing happens
2️⃣ Forgetting parentheses in conditionals
if greet:
    print("This always runs")  # ❌ greet is truthy
3️⃣ Returning a function instead of its result
def get_value():
    return 42

result = get_value  # ❌ function, not value
✅ When NOT Using () Is Actually Correct
def greet():
    print("Hello!")

callback = greet  # ✅ passing the function itself
callback()
Here, you want the function object—not execution—yet.
🧠 Simple Rule to Remember
👉 No parentheses → No execution
👉 Always use () to call a function
🏁 Final Takeaway
If your program runs without errors but nothing happens,
check this first:
👉 Did you forget the parentheses?
It’s small.
It’s silent.
And it causes hours of confusion.
Day 33: Using list() Instead of a Generator for Large Data
🐍 Python Mistakes Everyone Makes ❌
Day 33: Using list() Instead of a Generator for Large Data
When working with large datasets, how you iterate matters a lot. One small choice can cost you memory, time, and even crash your program.
❌ The Mistake
Creating a full list when you only need to loop once.
numbers = list(range(10_000_000))
for n in numbers:
    process(n)
This builds all 10 million numbers in memory before doing any work.
❌ Why This Fails
Uses a lot of memory
Slower startup time
Completely unnecessary if data is used once
Can crash programs with very large datasets
✅ The Correct Way
Iterate lazily using a generator (range is already one).
def process(n):
    # simulate some work
    if n % 1_000_000 == 0:
        print(f"Processing {n}")

for n in range(10_000_000):
    process(n)
This processes values one at a time, without storing them all.
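The same idea applies to intermediate results: a generator expression feeds `sum()` one value at a time, while a list comprehension materializes everything first (a minimal sketch; sizes are approximate):

```python
import sys

list_version = [i * 2 for i in range(100_000)]
gen_version = (i * 2 for i in range(100_000))

print(sys.getsizeof(list_version))  # hundreds of kilobytes
print(sys.getsizeof(gen_version))   # a couple hundred bytes, regardless of size

print(sum(gen_version))  # consumes the generator lazily, one value at a time
```

Swapping `[...]` for `(...)` is often the cheapest memory optimization available.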
🧠 Simple Rule to Remember
👉 If data is large and used once → use a generator
👉 Use lists only when you need all values at once
🏁 Key Takeaways
Generators are memory-efficient
range() is already lazy in Python 3
Avoid list() unless you truly need the list
Small choices scale into big performance wins
Efficient Python isn’t about fancy tricks; it’s about making the right default choices 🐍
Saturday, 17 January 2026
Day 32: Confusing Shallow vs Deep Copy
🐍 Python Mistakes Everyone Makes ❌
Day 32: Confusing Shallow vs Deep Copy
Copying data structures in Python looks simple—but it can silently break your code if you don’t understand what’s really being copied.
❌ The Mistake
Assuming copy() (or slicing) creates a fully independent copy.
a = [[1, 2], [3, 4]]
b = a.copy()
b[0].append(99)
print(a)
👉 Surprise: a changes too.
❌ Why This Fails
copy() creates a shallow copy
Only the outer list is duplicated
Inner (nested) objects are shared
Modifying nested data affects both lists
So even though a and b look separate, they still point to the same inner lists.
✅ The Correct Way
Use a deep copy when working with nested objects.
import copy

a = [[1, 2], [3, 4]]
b = copy.deepcopy(a)
b[0].append(99)
print(a)
👉 Now a remains unchanged.
🧠 Simple Rule to Remember
✔ Shallow copy → shares inner objects
✔ Deep copy → copies everything recursively
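You can confirm exactly what is shared with identity checks (a minimal sketch):

```python
import copy

a = [[1, 2], [3, 4]]
shallow = a.copy()
deep = copy.deepcopy(a)

print(shallow is a)        # False: the outer list is new
print(shallow[0] is a[0])  # True:  the inner lists are shared ⚠️
print(deep[0] is a[0])     # False: fully independent
```

`is` compares object identity, so it reveals the sharing that `==` comparisons hide.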
🏁 Key Takeaways
Not all copies are equal in Python
Nested data requires extra care
Use deepcopy() when independence matters
Understanding this distinction prevents hidden bugs that are extremely hard to debug later ⚠️