<![CDATA[Fezcodex]]> https://fezcode.com https://fezcode.com/logo512.png Fezcodex https://fezcode.com RSS for Node Sun, 08 Feb 2026 02:53:02 GMT Sun, 08 Feb 2026 02:53:01 GMT 60 <![CDATA[Linux vs. Unix: The Kernel Wars and the Philosophy of Modular Design]]> https://fezcode.com/blog/linux-vs-unix-the-kernel-wars https://fezcode.com/blog/linux-vs-unix-the-kernel-wars Sun, 08 Feb 2026 00:00:00 GMT Linux vs. Unix: The Kernel Wars and the Philosophy of Modular Design

Read more...

]]>
<![CDATA[The Halo Effect: Why We Trust Idiots with Good Hair]]> https://fezcode.com/blog/the-halo-effect https://fezcode.com/blog/the-halo-effect Sun, 08 Feb 2026 00:00:00 GMT

⚠️ Warning: Objects in Mirror Are Less Perfect Than They Appear

If you think this blog post is genius just because the font is nice and the layout is clean, you are currently being blinded by the very thing I'm about to roast. Welcome to the glow.

Read more...

]]>
<![CDATA[Mastering Git Worktrees: Parallel Development with AI Agents]]> https://fezcode.com/blog/mastering-git-worktrees-and-ai https://fezcode.com/blog/mastering-git-worktrees-and-ai Sun, 08 Feb 2026 00:00:00 GMT Mastering Git Worktrees: Parallel Development with AI Agents

Read more...

]]>
<![CDATA[Sartre & Camus - Existentialism and Absurdism]]> https://fezcode.com/blog/existentialism https://fezcode.com/blog/existentialism Sat, 07 Feb 2026 00:00:00 GMT Philosophy 101: Sartre & Camus - Existentialism and Absurdism

Read more...

]]>
<![CDATA[Wittgenstein - The Fly in the Fly-Bottle]]> https://fezcode.com/blog/wittgenstein https://fezcode.com/blog/wittgenstein Sat, 07 Feb 2026 00:00:00 GMT Philosophy 101: Wittgenstein - The Fly in the Fly-Bottle

Read more...

]]>
<![CDATA[Nietzsche - The Death of God and the Übermensch]]> https://fezcode.com/blog/nietzsche https://fezcode.com/blog/nietzsche Fri, 06 Feb 2026 00:00:00 GMT Philosophy 101: Nietzsche - The Death of God and the Übermensch

Read more...

]]>
<![CDATA[Heidegger - Being and Time]]> https://fezcode.com/blog/heidegger https://fezcode.com/blog/heidegger Fri, 06 Feb 2026 00:00:00 GMT Philosophy 101: Heidegger - Being and Time

Read more...

]]>
<![CDATA[Understanding Database Normalization: The Path to Third Normal Form (3NF)]]> https://fezcode.com/blog/understanding-database-normalization-3nf https://fezcode.com/blog/understanding-database-normalization-3nf Fri, 06 Feb 2026 00:00:00 GMT Database Normalization: A Clear Guide to 1NF, 2NF, and 3NF

Read more...

]]>
<![CDATA[CQRS: Command Query Responsibility Segregation in Modern Architecture]]> https://fezcode.com/blog/cqrs-in-go-for-geniuses https://fezcode.com/blog/cqrs-in-go-for-geniuses Fri, 06 Feb 2026 00:00:00 GMT CQRS: Command Query Responsibility Segregation in Modern Architecture

Read more...

]]>
<![CDATA[Kierkegaard - The Leap of Faith]]> https://fezcode.com/blog/kierkegaard https://fezcode.com/blog/kierkegaard Thu, 05 Feb 2026 00:00:00 GMT Philosophy 101: Kierkegaard - The Leap of Faith

Read more...

]]>
<![CDATA[Hyrum's Law: Why Your Bug Fix Broke My Spacebar Heating Workflow]]> https://fezcode.com/blog/hyrums-law https://fezcode.com/blog/hyrums-law Thu, 05 Feb 2026 00:00:00 GMT

⚠️ Warning: Behavioral Changes Ahead

If you rely on the specific way this blog post is formatted to scrape it for your AI training data, I apologize in advance. By reading this, you are effectively becoming an example of the very law I am about to explain.

Read more...

]]>
<![CDATA[Hegel - The Dialectic and World Spirit]]> https://fezcode.com/blog/hegel https://fezcode.com/blog/hegel Wed, 04 Feb 2026 00:00:00 GMT Philosophy 101: Hegel - The Dialectic and World Spirit

Read more...

]]>
<![CDATA[Schopenhauer - The Will and Pessimism]]> https://fezcode.com/blog/schopenhauer https://fezcode.com/blog/schopenhauer Wed, 04 Feb 2026 00:00:00 GMT Philosophy 101: Schopenhauer - The Will and Pessimism

Read more...

]]>
<![CDATA[René Descartes - The Father of Modern Philosophy]]> https://fezcode.com/blog/descartes https://fezcode.com/blog/descartes Tue, 03 Feb 2026 00:00:00 GMT Philosophy 101: René Descartes - The Father of Modern Philosophy

Read more...

]]>
<![CDATA[Immanuel Kant - The Thing-in-Itself]]> https://fezcode.com/blog/kant https://fezcode.com/blog/kant Tue, 03 Feb 2026 00:00:00 GMT Philosophy 101: Immanuel Kant - The Thing-in-Itself

Read more...

]]>
<![CDATA[Ethics - What Should We Do?]]> https://fezcode.com/blog/ethics https://fezcode.com/blog/ethics Mon, 02 Feb 2026 00:00:00 GMT Philosophy 101: Ethics - What Should We Do?

Read more...

]]>
<![CDATA[The Big Three - Socrates, Plato, Aristotle]]> https://fezcode.com/blog/the-big-three https://fezcode.com/blog/the-big-three Mon, 02 Feb 2026 00:00:00 GMT Philosophy 101: The Big Three - Socrates, Plato, Aristotle

Read more...

]]>
<![CDATA[Al-Ghazali - The Incoherence of the Philosophers]]> https://fezcode.com/blog/al-ghazali https://fezcode.com/blog/al-ghazali Mon, 02 Feb 2026 00:00:00 GMT Philosophy 101: Al-Ghazali - The Incoherence of the Philosophers

Read more...

]]>
<![CDATA[Introduction - The Examined Life]]> https://fezcode.com/blog/introduction https://fezcode.com/blog/introduction Sun, 01 Feb 2026 00:00:00 GMT Philosophy 101: Introduction - The Examined Life

Read more...

]]>
<![CDATA[Logic - The Toolbox]]> https://fezcode.com/blog/logic-and-arguments https://fezcode.com/blog/logic-and-arguments Sun, 01 Feb 2026 00:00:00 GMT Philosophy 101: Logic - The Toolbox

Read more...

]]>
<![CDATA[Epistemology - How Do You Know That?]]> https://fezcode.com/blog/epistemology https://fezcode.com/blog/epistemology Sun, 01 Feb 2026 00:00:00 GMT Philosophy 101: Epistemology - How Do You Know That?

Read more...

]]>
<![CDATA[Metaphysics - What is Real?]]> https://fezcode.com/blog/metaphysics https://fezcode.com/blog/metaphysics Sun, 01 Feb 2026 00:00:00 GMT Philosophy 101: Metaphysics - What is Real?

Read more...

]]>
<![CDATA[Architecting Trust: 5 Patterns to Prevent Insider Threats]]> https://fezcode.com/blog/architecting-trust-preventing-insider-threats https://fezcode.com/blog/architecting-trust-preventing-insider-threats Fri, 23 Jan 2026 00:00:00 GMT Architecting Trust: 5 Patterns to Prevent Insider Threats

Read more...

]]>
<![CDATA[Deep Link Configuration: Achieving a Global Parameter Observer in React]]> https://fezcode.com/blog/deep-link-configuration-with-url-parameters https://fezcode.com/blog/deep-link-configuration-with-url-parameters Wed, 21 Jan 2026 00:00:00 GMT Deep Link Configuration: Achieving a Global Parameter Observer in React

Read more...

]]>
<![CDATA[Introducing Fezluxe: A Study in Refined Architectural Elegance]]> https://fezcode.com/blog/introducing-fezluxe-refined-architectural-elegance https://fezcode.com/blog/introducing-fezluxe-refined-architectural-elegance Tue, 20 Jan 2026 00:00:00 GMT Introducing Fezluxe: A Study in Refined Architectural Elegance

Read more...

]]>
<![CDATA[The FIFA Paradox: Why We Spend 14 Hours Playing Games We Hate]]> https://fezcode.com/blog/gun-and-ball https://fezcode.com/blog/gun-and-ball Mon, 19 Jan 2026 00:00:00 GMT

⚠️ Disclaimer: Open Analysis

This post explores game data using statistical analysis. Please note that while I am an experienced engineer, I am not a specialized Data Scientist. I have made the code and data available on GitHub for transparency. If you find errors in the methodology or want to improve the model, I welcome your feedback and pull requests.

Read more...

]]>
<![CDATA[Dying is Easy, Comedy is Statistically Impossible: An IMDbayes Analysis]]> https://fezcode.com/blog/what-genre-should-i-watch https://fezcode.com/blog/what-genre-should-i-watch Sun, 18 Jan 2026 00:00:00 GMT

This analysis was built by a Software Engineer relying on 8-year-old university memories of statistics. If the math looks wrong, just assume it's a feature, not a bug. You can always contact me.

Deconstructing Hollywood: A Data Science Journey from Raw Data to p99 Insights

As software engineers, we are used to deterministic systems. If a = b, then a equals b. Data Science, however, deals with probability, distributions, and noise. It's less about "what is the answer" and more about "how confident are we in this trend?"

Recently, I wanted to bridge my engineering background with data science to answer a simple pop-culture question: How do different movie genres actually perform?

Are "Action" movies inherently rated lower than "Dramas"? Is it harder to make a masterpiece "Horror" movie than a masterpiece "Biography"?

To answer this, I didn't just want to run a script; I wanted to build a production-grade Data Science lab. (/s) This post details the entire journey—from choosing the modern Python stack and engineering the data pipeline to defining the statistical metrics that reveal the "truth" behind average ratings.

Part 1: The Engineering Foundation

A data project is only as good as its environment. I wanted a setup that was fast, reproducible, and clean.

The Stack Decision

I chose Python because it is the undisputed lingua franca of data science. The ecosystem (Pandas for data crunching, Seaborn for visualization) is unmatched.

The Package Manager: Why uv?

Traditionally, Python data science relies on Conda because it manages complex C-library dependencies used by math libraries like NumPy. However, Conda can be slow and bloated.

For this project, I chose uv.

uv is a modern, blazing-fast Python package manager written in Rust. It replaces pip, poetry, and virtualenv. It resolves dependencies in milliseconds and creates deterministic environments instantly. For a project relying on standard wheels like Pandas, uv provides a vastly superior developer experience.

# Setting up the environment took seconds
$ uv init movie-analysis
$ uv python install 3.10
$ uv add pandas matplotlib seaborn scipy jupyter ipykernel

Then I connected VS Code to the .venv created by uv, which gave me a robust Jupyter Notebook experience right in the IDE.

Part 2: The Data Pipeline (ETL)

I needed data with genres, votes, and ratings, so I went straight to the source: the IMDb Non-Commercial Datasets.

Then I faced a classic data engineering challenge: these are massive TSV (Tab Separated Values) files. Loading the entirety of IMDb into RAM on a laptop is a bad idea.

Solution? Build a Python ETL script to handle ingestion smartly:

  1. Stream & Filter: I used Pandas to read the raw files in chunks, filtering immediately for titleType == 'movie' and excluding older films. This kept memory usage low.
  2. Merge: I joined title.basics (genres/names) with title.ratings (scores/votes) on their unique IDs.
  3. The "Explode": This was the crucial data transformation step. IMDb lists genres as a single string: "Action,Adventure,Sci-Fi". To analyze by category, I had to split that string and "explode" the dataset, duplicating the movie row for each genre it belongs to.
# Transforming "Action,Comedy" into two distinct analysis rows
df['genres'] = df['genres'].str.split(',')
df_exploded = df.explode('genres')

Part 3: The Science (Beyond Averages)

With clean data in hand, we moved into a Jupyter Notebook for Exploratory Data Analysis (EDA).

1. Removing the Noise (The Long Tail)

If you average every movie on IMDb, your data is polluted by home videos with 5 votes from the director's family. In statistics, vote counts often follow a "Power Law" or long-tail distribution.

To analyze global sentiment, we had to filter out the noise. We set a threshold, dropping any movie with fewer than 100 votes. This ensured our statistical analysis was based on titles with a minimum level of public engagement.
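The vote-count cutoff is a one-liner in Pandas. Here is a minimal sketch over a toy frame (the column names follow IMDb's title.ratings schema, but the rows are made up):

```python
import pandas as pd

# Toy frame standing in for the merged IMDb data (values are invented).
df = pd.DataFrame({
    "primaryTitle": ["Home Video", "Cult Classic", "Blockbuster"],
    "averageRating": [9.8, 7.4, 8.2],
    "numVotes": [5, 2_300, 1_500_000],
})

# Drop the long tail: anything under 100 votes is noise, not signal.
df = df[df["numVotes"] >= 100].reset_index(drop=True)
```

The director's-family 9.8 disappears, and every remaining row reflects real public engagement.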

2. Visualizing the Truth (The Box Plot)

A simple average rating is misleading. If a genre has many 1/10s and many 10/10s, the average is 5/10 - but that doesn't tell the story of how polarizing it is.

I used a Box Plot to visualize the distribution. It shows the median (the center line), the Interquartile Range (the colored box containing the middle 50% of data), and outliers (the dots).

The Box Plot

Initial Observations:

  • Documentary/Biography: High medians, compact boxes. They are consistently rated highly.
  • Horror: The lowest median and a wide spread. It’s very easy to make a bad horror movie.

3. The Metrics: Weighted Ratings & p99

To get deeper insights, I needed better math than simple means.

Metric A: The Weighted Rating (Bayesian Average)

How do you compare a movie with a 9.0 rating and 105 votes against an 8.2 rating with 500,000 votes? The latter score is more statistically significant.

I adopted IMDb's own Weighted Rating formula. This "Bayesian average" pulls a movie's rating toward the global average $C$ if it has few votes $v$, only allowing it to deviate as it gains more votes over a threshold $m$.

$$ WR = \left( \frac{v}{v+m} \cdot R \right) + \left( \frac{m}{v+m} \cdot C \right) $$

Where:

  • $R$ = Average Rating of the movie
  • $v$ = Number of votes for the movie
  • $m$ = Minimum votes required to be listed (Threshold: 100)
  • $C$ = Mean vote across the whole dataset

This provided a fair "Quality Score" for every movie.
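The formula translates directly into Pandas. A sketch with toy numbers (column names are my assumption, mirroring the IMDb schema):

```python
import pandas as pd

def weighted_rating(df: pd.DataFrame, m: int = 100) -> pd.Series:
    """IMDb-style Bayesian average: pull low-vote ratings toward C."""
    C = df["averageRating"].mean()   # mean vote across the dataset
    v = df["numVotes"]
    R = df["averageRating"]
    return (v / (v + m)) * R + (m / (v + m)) * C

# A 9.0 with 105 votes vs. an 8.2 with 500k votes (toy data).
df = pd.DataFrame({"averageRating": [9.0, 8.2], "numVotes": [105, 500_000]})
df["weightedRating"] = weighted_rating(df)
# The 105-vote 9.0 is dragged well below 9.0; the 500k-vote 8.2 barely moves.
```

With only two rows, C is 8.6, so the 105-vote movie lands around 8.8 while the blockbuster stays pinned at 8.2.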

Metric B: The p99 Ceiling

I wanted to know the "potential" of a genre. Even if most Action movies are mediocre, how good are the very best ones?

For this, I calculated the 99th Percentile (p99) rating for each genre. This is the rating value below which 99% of the genre falls. It represents the elite tier, the "Masterpiece Ceiling."
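Computing that ceiling per genre is a single groupby-quantile. A sketch over a tiny exploded frame (toy values, not the real dataset):

```python
import pandas as pd

# Toy exploded frame: one row per (movie, genre) pair.
df = pd.DataFrame({
    "genres": ["Horror", "Horror", "Horror", "Documentary", "Documentary"],
    "weightedRating": [3.0, 5.5, 8.9, 7.2, 8.1],
})

# The "Masterpiece Ceiling": 99% of each genre sits below this value.
p99 = df.groupby("genres")["weightedRating"].quantile(0.99)
```

Even in this toy example, Horror's ceiling outruns Documentary's despite its terrible floor, which is exactly the pattern the real data shows.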

Part 4: The Deductions (The Gap Analysis)

By combining the Average Weighted Rating (the typical experience) and the p99 Rating (the elite potential), we created a "Gap Analysis" chart.

The dark green bar is the average quality. The total height of the bar is the p99 ceiling. The light green area represents the "Masterpiece Gap".

Masterpiece Gap

The Data Science Deductions

This single chart reveals the "personality" of every genre:

  1. The "Safe Bets" (Documentary, History, Biography): They have very high averages (tall dark bars) and a small gap to the ceiling. Deduction: It is difficult to make a poorly rated documentary. Audience selection bias likely plays a role here (people only watch docs on topics they already like).

  2. The "High Risk / High Reward" (Horror, Sci-Fi): They have the lowest averages (short dark bars), indicating the typical output is poor. However, their p99 ceilings remain high. Deduction: The gap is huge. It is incredibly difficult to execute these genres well, but when it's done right (e.g., Alien, The Exorcist), they are revered just as highly as dramas.

  3. The Animation Anomaly: Animation has a high average and a very high ceiling. Deduction: Statistically, this is perhaps the most consistently high-quality genre in modern cinema.

Conclusion

This project demonstrated that with a solid engineering setup using modern tools like uv, and by applying statistical concepts beyond simple averages, we can uncover nuanced truths hidden in raw data. Averages tell you what is probable; distributions and percentiles tell you what is possible.

Question A: Which genre is "easier" to make? (Action vs. Drama vs. Comedy)

The Data Verdict: It is significantly "easier" to make an acceptable Drama than an acceptable Action or Comedy movie.

  • Evidence: Take a look at the box plot.
    • Drama has a high median and a "tight" box (smaller Interquartile Range). This means even "average" Dramas are usually rated around 6.5–7.0. The "floor" is high.
    • Action has a lower median. Action movies require budget, stunts, and effects. If those look cheap, the rating tanks immediately. A bad drama is just "boring" (5/10); a bad action movie looks "broken" (3/10).
    • Comedy is arguably the hardest to get a high rating for. Humor is subjective. If a joke lands for 50% of the audience but annoys the other 50%, the rating averages out to a 5.0. Drama is universal; Comedy is divisive.

Question B: Should I use lower search bounds for Comedy compared to Drama?

The Data Verdict: YES. Absolutely.

  • The "Genre Inflation" Factor: Users rate genres differently. A 7.0 in Horror or Comedy is effectively an 8.0 in Drama or Biography.
    • The Strategy: If you filter for Rating > 7.5, you will see hundreds of Biographies, but you will filter out some of the funniest Comedies ever made (which often sit at 6.8 - 7.2).
    • Action/Comedy Filter: Set your threshold to 6.5.
    • Drama/Doc Filter: Set your threshold to 7.5.

Question C: The "Blindfold Test" (Documentary vs. Sci-Fi)

The Data Verdict: You will be statistically safer picking the Documentary.

  • The "Floor" Concept: Look at the "Whiskers" (the lines extending from the boxes) on the box plot.

    • Sci-Fi: The bottom whisker goes deep down (towards 1.0 or 2.0). There is a significant statistical probability that a random Sci-Fi movie is unwatchable garbage.
    • Documentary: The bottom whisker rarely dips below 5.0 or 6.0.
  • The Psychology:

    • Documentaries are usually made by passionate experts about specific topics. They rarely "fail" completely.
    • Sci-Fi is high-risk. It attempts to build new worlds. When that fails, it looks ridiculous, leading to "hate-watching" and 1-star reviews.
    • Conclusion: If you are tired and just want a "guaranteed decent watch" (Low Variance), pick Documentary. If you want to gamble for a potentially mind-blowing experience (High Variance), pick Sci-Fi.

You can check the project here: IMDbayes

Read more...

]]>
<![CDATA[Upgrading Debian 11 to 13: The Safe Path]]> https://fezcode.com/blog/debian-upgrade-path https://fezcode.com/blog/debian-upgrade-path Mon, 12 Jan 2026 00:00:00 GMT Upgrading Debian 11 to 13: The Safe Path

So, you're on Debian 11 (Bullseye) and want to jump to Debian 13 (Trixie). Maybe you saw some shiny new package, or you just want to be on the cutting edge (or as cutting edge as Debian gets).

But here's the catch: You can't skip a version.

Debian upgrades are designed to be sequential. Jumping from 11 straight to 13 is a recipe for a broken system (Frankenstein packages, dependency hell, the works). The safe path is 11 → 12 → 13.

Here is the quick gist of how to do it properly.

Phase 1: Bullseye (11) to Bookworm (12)

First, make sure your current system is fully updated and clean.

# Clean up any broken sources first!
# If you have 404 errors on backports, comment them out in /etc/apt/sources.list
sudo apt update
sudo apt full-upgrade -y

Now, switch your sources to Bookworm.

sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
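If you want to rehearse that substitution before touching the real file, run it against a scratch copy first (the file below is a synthetic example line, not your actual sources.list):

```shell
# Rehearse the release-name swap on a scratch file.
printf 'deb http://deb.debian.org/debian bullseye main contrib\n' > /tmp/sources.list.test
sed -i 's/bullseye/bookworm/g' /tmp/sources.list.test
cat /tmp/sources.list.test
# -> deb http://deb.debian.org/debian bookworm main contrib
```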

Run the upgrade. This is the big one.

sudo apt update
sudo apt full-upgrade -y

Reboot your system.

Phase 2: Bookworm (12) to Trixie (13)

Welcome back. You are now on Debian 12. Let's keep going.

Update your sources to Trixie.

sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list

Run the upgrade again.

sudo apt update
sudo apt full-upgrade -y

Phase 3: Cleanup

You made it. Now clean up the leftovers.

sudo apt autoremove -y
sudo reboot

Verification

When you're back, check your version:

cat /etc/debian_version
# Should output 13.x (or testing/trixie)

And that's it. You have successfully time traveled.

Read more...

]]>
<![CDATA[Steganography: Hiding Secrets in Plain Sight with LSB]]> https://fezcode.com/blog/steganography-lsb-deep-dive https://fezcode.com/blog/steganography-lsb-deep-dive Mon, 12 Jan 2026 00:00:00 GMT Steganography is the art and science of hiding information within other non-secret data. Unlike cryptography, which scrambles a message so it can't be read, steganography hides the very existence of the message.

In this deep dive, we'll explore the implementation of the Steganography Tool added to Fezcodex, focusing on the Least Significant Bit (LSB) technique.

The Core Concept: Least Significant Bit (LSB)

Digital images are made up of pixels. In a standard 24-bit RGB image, each pixel has three color channels: Red, Green, and Blue. Each channel is represented by 8 bits (a value from 0 to 255).

Example of a pixel's color:

  • Red: 10110101 (181)
  • Green: 01100110 (102)
  • Blue: 11001011 (203)

The Least Significant Bit is the rightmost bit in these binary strings. If we change this single bit, the decimal value of the color channel only changes by 1. For example, changing the Red channel from 10110101 (181) to 10110100 (180) is a change so subtle that the human eye cannot detect it in a complex image.

By replacing the LSB of each color channel with a bit from our secret message, we can embed data directly into the image.
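In code, that single-bit swap is just a mask and an OR. A minimal sketch (the helper name is mine, not the tool's):

```javascript
// Clear the channel's last bit with & 0xfe, then OR in the secret bit.
const embedBit = (channel, bit) => (channel & 0xfe) | bit;

console.log(embedBit(181, 0)); // 180, invisible to the eye
console.log(embedBit(181, 1)); // 181, the LSB was already 1
```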

The Protocol: FEZ Steganography

To make the extraction process reliable, we've implemented a simple protocol:

  1. Magic Header (FEZ): The first 24 bits (3 bytes) of the hidden data always spell "FEZ". This allows the decoder to verify if an image actually contains a hidden message from our tool.
  2. Length (32-bit): The next 32 bits represent the length of the message in bytes. This tells the decoder exactly when to stop reading.
  3. The Message: The remaining bits are the actual UTF-8 encoded message.

Tracing the Magic: Encoding "FEZ"

Let's look at how the magic header FEZ is scattered across the first few pixels.

Step 1: Convert characters to binary

  • F (70): 0 1 0 0 0 1 1 0
  • E (69): 0 1 0 0 0 1 0 1
  • Z (90): 0 1 0 1 1 0 1 0

Combined Bitstream: 01000110 + 01000101 + 01011010 (24 bits total)

Step 2: Embed into pixels

Since each pixel has 3 channels (R, G, B), we need 8 pixels to hide these 24 bits.

Pixel     Channel   Original Byte   Bit to Hide   Modified Byte
Pixel 1   Red       10110101        0 (from F)    10110100
          Green     01100110        1 (from F)    01100111
          Blue      11001011        0 (from F)    11001010
Pixel 2   Red       01010100        0 (from F)    01010100
          Green     11110011        0 (from F)    11110010
          Blue      00110011        1 (from F)    00110011
Pixel 3   Red       10101010        1 (from F)    10101011
          Green     11001101        0 (from F)    11001100
          Blue      00011110        0 (from E)    00011110
Pixel 4   Red       10110010        1 (from E)    10110011
          Green     01101101        0 (from E)    01101100
          Blue      11100011        0 (from E)    11100010
Pixel 5   Red       01010101        0 (from E)    01010100
          Green     11110010        1 (from E)    11110011
          Blue      00110011        0 (from E)    00110010
Pixel 6   Red       10101010        1 (from E)    10101011
          Green     11001101        0 (from Z)    11001100
          Blue      01011110        1 (from Z)    01011111
Pixel 7   Red       10110011        0 (from Z)    10110010
          Green     01101100        1 (from Z)    01101101
          Blue      11100011        1 (from Z)    11100011
Pixel 8   Red       01010101        0 (from Z)    01010100
          Green     11110010        1 (from Z)    11110011
          Blue      00110011        0 (from Z)    00110010

By the time we reach Pixel 8, all 24 bits of "FEZ" are woven into the image. If you open this in a hex editor, you might see that the color 181 became 180, but the text "FEZ" is nowhere to be found in the raw bytes!

Why PNG and not JPEG?

Our tool works best with PNG files. Why?

  • PNG (Portable Network Graphics) is a lossless format. It preserves every single bit exactly as it was saved.
  • JPEG (Joint Photographic Experts Group) is a lossy format. It uses compression algorithms that slightly alter pixel values to reduce file size. These tiny changes are fine for human viewing, but they destroy the data we've hidden in the LSBs.

The Implementation (JavaScript/Canvas)

We use the HTML5 <canvas> API to access and manipulate image data at the pixel level.

Encoding Logic

const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data; // Uint8ClampedArray [R, G, B, A, R, G, B, A, ...]

// ... transform message to bits ...

let bitIndex = 0;
for (let i = 0; i < data.length && bitIndex < allBits.length; i += 4) {
  for (let j = 0; j < 3 && bitIndex < allBits.length; j++) {
    // Replace LSB of R, G, or B
    // (data[i + j] & 0xfe) clears the last bit
    // | allBits[bitIndex++] sets it to our secret bit
    data[i + j] = (data[i + j] & 0xfe) | allBits[bitIndex++];
  }
}
ctx.putImageData(imageData, 0, 0);

Decoding Logic

Decoding is the reverse process. We iterate through the pixels, extract the LSB of each R, G, and B channel, and rebuild the bitstream until we've parsed the header, the length, and finally the message content.
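That reverse pass can be sketched like this (helper names are mine; the actual tool's internals may differ). It walks the same RGBA buffer, harvests one bit per color channel, and packs them back into bytes MSB-first:

```javascript
// Read the LSB of each R, G, B channel from a canvas pixel buffer
// (alpha is skipped), stopping once we have `bitCount` bits.
function extractBits(data, bitCount) {
  const bits = [];
  for (let i = 0; i < data.length && bits.length < bitCount; i += 4) {
    for (let j = 0; j < 3 && bits.length < bitCount; j++) {
      bits.push(data[i + j] & 1);
    }
  }
  return bits;
}

// Pack bits back into bytes, most significant bit first.
function bitsToBytes(bits) {
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    let byte = 0;
    for (let j = 0; j < 8; j++) byte = (byte << 1) | bits[i + j];
    bytes.push(byte);
  }
  return bytes;
}

// Usage sketch: verify the magic header before parsing length + message.
// const header = String.fromCharCode(...bitsToBytes(extractBits(data, 24)));
// if (header !== 'FEZ') { /* no hidden message in this image */ }
```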

Challenges and Limitations

  • Capacity: The amount of data you can hide depends on the image resolution. Each pixel can hold 3 bits (1 for each RGB channel). A 1080p image (1920x1080) can theoretically hold about 777 KB of hidden data.
  • Robustness: LSB steganography is very fragile. Resizing, cropping, or re-saving the image as a JPEG will likely corrupt the hidden message.
  • Security: Pure LSB is "security through obscurity." Anyone who knows the technique can extract the message. For true security, you should encrypt the message before hiding it in the image.
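The capacity bullet works out like this, once the 24-bit header and 32-bit length field are subtracted (a quick back-of-the-envelope check, not code from the tool):

```javascript
// 3 usable bits per pixel (R, G, B; alpha untouched), minus the
// 24-bit "FEZ" header and the 32-bit length field, divided into bytes.
const capacityBytes = (width, height) =>
  Math.floor((width * height * 3 - 24 - 32) / 8);

console.log(capacityBytes(1920, 1080)); // 777593 bytes, roughly 777 KB
```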

Try it out!

Check out the Steganography Tool in the Applications section and start sending your own cryptic signals through the digital aether.

Read more...

]]>
<![CDATA[The Ultimate Pixel Art Resources Guide]]> https://fezcode.com/blog/pixel-art-resources-guide https://fezcode.com/blog/pixel-art-resources-guide Mon, 12 Jan 2026 00:00:00 GMT Pixel art is a beautiful and nostalgic medium that has seen a massive resurgence in recent years, especially within the indie game development scene. Whether you're a seasoned artist or just starting, having the right tools and resources can make a world of difference.

In this post, we'll explore some of the best resources for pixel art, inspired by the excellent guide by JuniperDev.

The Ultimate Video Guide

First and foremost, if you haven't seen it yet, check out this comprehensive video: The ONLY Pixel Art Guide You Need (Beginner to Advanced)

Essential Software: Aseprite

When it comes to creating pixel art, Aseprite is widely considered the industry standard. It's not just a drawing tool; it's a specialized environment for sprites and animation.

  • Why it's great: Pixel-perfect drawing mode, specialized animation timeline, easy sprite sheet exporting, and a very active community.
  • Where to get it: aseprite.org

Mastering Color: Lospec

Color is everything in pixel art. Since you're often working with a limited palette, choosing the right colors is crucial. Lospec is the go-to resource for this.

  • Palette Database: Thousands of pre-made palettes that you can filter by color count.
  • Tools: They also offer web-based tools for checking contrast and creating your own palettes.
  • Link: lospec.com

Assets and Inspiration: itch.io

Sometimes you need a head start, or you just want to see how other artists tackle specific challenges. itch.io is a goldmine for pixel art assets.

  • What you'll find: Character sprites, tilesets, UI elements, and full environmental packs.
  • Learning from others: Analyzing high-quality asset packs is one of the best ways to learn techniques like dithering, sub-pixeling, and cluster management.
  • Link: itch.io/game-assets/free/tag-pixel-art

Quick Tips for Beginners

  1. Start Small: Don't try to draw a massive 256x256 piece right away. Start with 16x16 or 32x32.
  2. Limit Your Palette: Using too many colors can make your art look messy. Stick to 4-8 colors initially.
  3. Study Real Life: Even though it's stylized, good pixel art is often grounded in real-world lighting and anatomy.
  4. Practice Your Lines: Learn about "jaggies" and how to avoid them to keep your lines looking clean and intentional.

Pixel art is as much about what you leave out as what you put in. Happy pixeling!

Read more...

]]>
<![CDATA[Implementing Drag and Drop in React without Libraries]]> https://fezcode.com/blog/implementing-drag-and-drop-in-react https://fezcode.com/blog/implementing-drag-and-drop-in-react Sat, 10 Jan 2026 00:00:00 GMT When building Tier Forge, I needed a flexible way to move items between the "pool" and various "tiers". While libraries like react-beautiful-dnd or dnd-kit are excellent, sometimes you just want full control without the overhead.

Here is how I implemented a robust drag-and-drop system using only the native HTML5 API and React state.

The State Architecture

The key to a good DnD system is centralized state. In TierForge, the state is held in the parent component:

const [tiers, setTiers] = useState(DEFAULT_TIERS); // The board
const [poolItems, setPoolItems] = useState([]);    // The unranked items
const [dragData, setDragData] = useState(null);    // What are we dragging?

We track dragData to know what is moving (itemId) and where it came from (sourceId).

The Handlers

We need three main handlers: onDragStart, onDragOver, and onDrop.

1. Starting the Drag

When a user grabs an item, we store its ID and source container ID. We also set dataTransfer for compatibility.

const handleDragStart = (e, itemId, sourceId) => {
  setDragData({ itemId, sourceId });
  e.dataTransfer.effectAllowed = 'move';
  // Fallback for some browsers
  e.dataTransfer.setData('text/plain', JSON.stringify({ itemId, sourceId }));
};

2. Allowing the Drop

By default, HTML elements don't accept drops. We must prevent the default behavior.

const handleDragOver = (e) => {
  e.preventDefault();
  e.dataTransfer.dropEffect = 'move';
};

3. Handling the Drop

This is where the magic happens. When an item is dropped, we:

  1. Identify the Source (where it came from) and Target (where it landed).
  2. If Source === Target, do nothing (or reorder).
  3. Find the item in the Source array.
  4. Remove it from the Source.
  5. Add it to the Target.
const handleDrop = (e, targetId) => {
  e.preventDefault();
  // Prefer React state; dataTransfer may be empty or unreadable.
  let data = dragData;
  if (!data) {
    try { data = JSON.parse(e.dataTransfer.getData('text/plain')); } catch { return; }
  }
  if (!data) return;

  const { itemId, sourceId } = data;
  if (sourceId === targetId) return;

  // ... Logic to find item, remove from source, add to target ...
  // This involves setTiers() and setPoolItems() updates.
};
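The elided find/remove/add step can be sketched as a pure helper over a map of container id to item array (names here are illustrative, not the exact TierForge code):

```javascript
// lists: { [containerId]: Array<{ id: string }> }
// Returns a new object; never mutates the input, so it plugs
// straight into React state setters.
function moveItem(lists, itemId, sourceId, targetId) {
  const item = lists[sourceId].find((it) => it.id === itemId);
  if (!item || sourceId === targetId) return lists;
  return {
    ...lists,
    [sourceId]: lists[sourceId].filter((it) => it.id !== itemId),
    [targetId]: [...lists[targetId], item],
  };
}
```

Inside handleDrop you would derive the new tiers and pool from this result and call setTiers / setPoolItems once each.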

The Components

Draggable Item

The item itself needs the draggable attribute and the start handler.

<div
  draggable
  onDragStart={(e) => onDragStart(e, item.id, sourceId)}
  className="cursor-grab active:cursor-grabbing ..."
>
  {/* Content */}
</div>

Drop Zone

The container (Tier or Pool) listens for drag-over and drop events.

<div
  onDragOver={handleDragOver}
  onDrop={(e) => handleDrop(e, containerId)}
  className="..."
>
  {/* Render Items */}
</div>

Why Native API?

  1. Zero Dependencies: Keeps the bundle size small.
  2. Full Control: I can define exactly how state updates happen.
  3. Performance: Direct DOM events are highly performant.

This pattern powers the entire Tier Forge experience, allowing smooth transitions of assets between the chaotic pool and the structured tiers.

Read more...

]]>
<![CDATA[Fixing gh-pages: Resolving spawn ENAMETOOLONG]]> https://fezcode.com/blog/gh-pages-enametoolong-fix https://fezcode.com/blog/gh-pages-enametoolong-fix Thu, 08 Jan 2026 00:00:00 GMT Resolving spawn ENAMETOOLONG in gh-pages Deployment

If you've been using the gh-pages package for a while, especially in projects with large build folders or complex structures, you might have encountered the dreaded spawn ENAMETOOLONG error when running your deploy script.

The Problem

When executing the standard deployment command:

gh-pages -d build -b gh-pages

The process fails with a system error indicating that the argument list or the command path itself has exceeded the operating system's limits. This is often related to how the underlying globby or async dependencies handle file lists in older versions of the package (like 6.3.0).

The issue is documented and discussed in detail here: gh-pages Issue #585.

The Fix

The specific fix for this issue was highlighted in this GitHub comment, which explains that the ENAMETOOLONG error occurs on Windows when the rm command receives an excessively long list of files as arguments.

diff --git a/lib/git.js b/lib/git.js
index d4c5724272d00bd1f0d76c47dab47d21ccd094d9..d86ac2b0bd7cbc02f34a50dac6980965102ee964 100644
--- a/lib/git.js
+++ b/lib/git.js
@@ -143,7 +143,7 @@ Git.prototype.rm = function (files) {
   if (!Array.isArray(files)) {
     files = [files];
   }
-  return this.exec('rm', '--ignore-unmatch', '-r', '-f', '--', ...files);
+  return this.exec('rm', '--ignore-unmatch', '-r', '-f', '--', '.');
 };

 /**

The suggested workarounds included batching the file deletions or simplifying the command to target the current directory (.) instead of individual files. Fortunately, these improvements (including a more robust batching logic and a migration to tinyglobby) have already been merged into the main branch of the repository via PR #607.

While we wait for a stable release on NPM that fully addresses this in all environments, the most effective way to resolve it is to use the latest development version directly from the source.

By updating your package.json to point to the GitHub repository's main branch, you get the latest fixes (including the migration to tinyglobby and updated commander logic) that bypass these system limits.

Implementation

Update your package.json dependencies:

"devDependencies": {
  "gh-pages": "github:tschaub/gh-pages"
}

Then, refresh your installations:

npm install

This simple change allowed us to resume our production deployments without hitches, ensuring that our "Brutalist" digital garden stays fresh and accessible.

Read more...

]]>
<![CDATA[Git Cheatsheet: From Basics to Time Travel]]> https://fezcode.com/blog/git-cheatsheet-gist https://fezcode.com/blog/git-cheatsheet-gist Thu, 08 Jan 2026 00:00:00 GMT Git Cheatsheet: From Basics to Time Travel

A collection of essential Git commands, from daily workflows to digging through the depths of your repository's history.

🔍 Searching History

Find when a file existed (even if deleted)

git log --all -- [path]

Search for content changes (Pickaxe)

Find commits where a specific string was added or removed:

git log -S "your_search_string"

Search content with Regex

git log -G "your_regex"

Find files containing a string in any commit or branch

git rev-list --all | xargs git grep -l "your_search_string"

See the history of a specific function/method

git log -L :function_name:file_path

🚀 Daily Workflow

Stage and Commit

git add .
git commit -m "feat: descriptive message"

Undo last commit (keep changes)

git reset --soft HEAD~1

Fix the last commit message

git commit --amend -m "new message"

🌿 Branching & Merging

Switch to a new branch

git checkout -b feature/cool-stuff
# or the newer way:
git switch -c feature/cool-stuff

List all branches (including remote)

git branch -a

Safely delete a branch

git branch -d branch_name

🛠️ Cleanup & Maintenance

Discard all local changes

git reset --hard HEAD

Clean untracked files

git clean -fd

Stash changes for later

git stash push -m "Work in progress"
git stash list
git stash pop

📤 Remote Operations

Update local with remote and rebase

git pull --rebase origin main

Prune old remote tracking branches

git fetch -p

Read more...

]]>
<![CDATA[Aether: Cyberpunk Audio Interface]]> https://fezcode.com/blog/aether-music-player https://fezcode.com/blog/aether-music-player Tue, 06 Jan 2026 00:00:00 GMT Aether: The Cyberpunk Music Player

I've just deployed Aether, a new cloud-based music player for Fezcodex.

Aether Music Player

Overview

Aether isn't just a music player; it's an atmospheric audio interface designed to immerse you in the soundscape of the site. It features:

  • Cyberpunk Aesthetic: A high-contrast, terminal-inspired interface with CRT scanlines, glitch effects, and a generative art background that reacts to the music.
  • Persistent Playback: A tiny, "cyber deck" style player docks to the bottom of your screen, allowing you to browse the site without interrupting your tunes.
  • Generative Art: If a track lacks cover art, the system generates a unique visual signature based on the track's title.

Check it out here: Aether Music Player

Enjoy the vibes.

Read more...

]]>
<![CDATA[5 Ways to Pass Arguments in a URL (Beyond the Basic Query)]]> https://fezcode.com/blog/5-ways-to-pass-arguments-in-a-url https://fezcode.com/blog/5-ways-to-pass-arguments-in-a-url Wed, 24 Dec 2025 00:00:00 GMT When building web applications or designing APIs, understanding how to transfer data is crucial. While Query Parameters (the bits after the ?) are the most common method, there are four other fundamental ways to pass arguments to a server via a URL or its associated HTTP request.

Here is a quick reference guide to the five main argument passing mechanisms:

1. Query Parameters

  • Location: Appears in the URL after a ? (question mark) and separated by & (ampersand) symbols.
  • Purpose: Used for optional parameters such as filtering, sorting, searching, or pagination controls.
  • Characteristics: Data is highly visible (in the URL, server logs, and browser history). It is typically used with GET requests.
  • Example: https://example.com/products?category=1&sort=price

2. Path Parameters

  • Location: Directly integrated into the URL's path structure.
  • Purpose: Used to uniquely identify a specific resource or define a hierarchical location.
  • Characteristics: Essential for defining clear, clean, and meaningful URLs, especially in RESTful API design.
  • Example: https://example.com/users/123 or https://example.com/books/sci-fi/dune

3. Header Parameters

  • Location: Contained within the HTTP Request Header, invisible in the URL.
  • Purpose: Used for metadata about the request, such as authentication (e.g., API keys, tokens), content type, and language preferences.
  • Characteristics: Offers better security for sensitive, non-data payload information compared to Query Parameters, as it doesn't appear in the URL.
  • Example: Header: Authorization: Bearer token or Header: Content-Type: application/json

4. Fragment Identifier Arguments

  • Location: Appears at the very end of the URL after a # (hash symbol).
  • Purpose: Used for client-side functionality, like navigating to a specific section (anchor) on a page or managing application state in Single Page Applications (SPAs).
  • Characteristics: The browser does NOT send this part to the server; it is client-side only. It can still be used to pass data to the front-end application.
  • Example: https://example.com/page#section-name

5. Request Body Arguments

  • Location: Contained within the body (payload) of the HTTP request, invisible in the URL.
  • Purpose: Used for sending large data payloads when creating or updating resources (e.g., submitting a complex form, uploading a file, or sending a JSON object).
  • Characteristics: The primary method for data submission using POST, PUT, or PATCH HTTP methods. It is an HTTP request argument, not a true URL argument, and it is secure from URL exposure.
  • Example: (Data like a user object in JSON format is sent in the hidden body payload.)
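
The URL-side argument types above can be pulled apart with the standard URL API; headers and body belong to the HTTP request rather than the URL, so they are shown here as plain objects. A quick sketch:

```javascript
// Parsing the URL-based argument types with the WHATWG URL API.
const url = new URL('https://example.com/users/123?category=1&sort=price#section-name');

// 1. Query parameters
const sort = url.searchParams.get('sort');   // 'price'

// 2. Path parameters (position-based; routers usually extract these for you)
const userId = url.pathname.split('/')[2];   // '123'

// 4. Fragment identifier (client-side only; never sent to the server)
const section = url.hash.slice(1);           // 'section-name'

// 3 & 5. Header and body arguments travel with the HTTP request instead:
const request = {
  headers: { Authorization: 'Bearer token' },
  body: JSON.stringify({ name: 'Ada' }),     // hidden payload, not in the URL
};
```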

Conclusion

By strategically selecting among Query, Path, Header, Fragment, or Body arguments, developers can ensure their data is transferred efficiently and securely, leading to a robust and scalable application architecture.

Read more...

]]>
<![CDATA[Quick Renaming .js to .jsx for React]]> https://fezcode.com/blog/renaming-js-to-jsx-gist https://fezcode.com/blog/renaming-js-to-jsx-gist Tue, 23 Dec 2025 00:00:00 GMT The Command (PowerShell)

I ran this script in the terminal. It uses "piping" (|) to pass the result of one command to the next, like a bucket brigade.

Get-ChildItem -Path src -Recurse -Filter *.js |
  Where-Object { $_.Name -notin @('index.js', 'reportWebVitals.js', 'setupTests.js') } |
  ForEach-Object {
    if (Select-String -Pattern "<[a-zA-Z]" -Path $_.FullName -Quiet) {
      Write-Host "Renaming $($_.Name) to .jsx";
      Rename-Item -LiteralPath $_.FullName -NewName ($_.Name -replace '\.js$', '.jsx')
    }
  }

Deep Dive: Commands & Arguments

Here is exactly what every part of that spell does:

1. Finding the files Get-ChildItem -Path src -Recurse -Filter *.js

  • Get-ChildItem: The standard command to list files (like ls or dir).
  • -Path src: We only look inside the src folder.
  • -Recurse: We dig deep into every subfolder, not just the top level.
  • -Filter *.js: We ignore everything except files ending in .js.

2. Filtering the list Where-Object { $_.Name -notin @(...) }

  • Where-Object: Acts like a bouncer; only lets items through that match the condition.
  • $_: Represents the "current file" being checked.
  • -notin: The condition operator. We are saying "The name must NOT be in this list".
  • @('index.js', ...): The list of system files we want to skip (leave as .js).

3. Processing each file ForEach-Object { ... }

  • ForEach-Object: Runs the code block inside { ... } for every single file that made it past the filter.

4. Checking for React Code (JSX) if (Select-String -Pattern "<[a-zA-Z]" -Path $_.FullName -Quiet)

  • Select-String: Searches for text inside a file (like grep).
  • -Pattern "<[a-zA-Z]": A Regex pattern. It looks for a < followed by a letter. This catches HTML tags like <div> or React components like <App>.
  • -Path $_.FullName: The full path to the file we are currently reading.
  • -Quiet: Important! This tells the command "Don't print the matching text, just tell me True or False."

5. Renaming the file Rename-Item -LiteralPath $_.FullName -NewName (...)

  • Rename-Item: The command to change a file's name.
  • -LiteralPath $_.FullName: We use the full path to ensure we target the exact right file.
  • -NewName ($_.Name -replace '\.js$', '.jsx'): We calculate the new name by taking the old name and swapping the ending .js with .jsx.

The Result

Now your code editor knows exactly which files contain UI components. You'll get better autocomplete, better color highlighting, and generally a much happier development experience.

Read more...

]]>
<![CDATA[The Corrupted Blood Incident: When a Glitch Taught Us About Pandemics]]> https://fezcode.com/blog/corrupted-blood-incident https://fezcode.com/blog/corrupted-blood-incident Sun, 21 Dec 2025 00:00:00 GMT Okay, gather 'round, fellow nerds and accidental epidemiologists. I need to talk about something that happened in World of Warcraft back in 2005. It's called the Corrupted Blood incident, and it's basically the coolest (and most terrifying) accidental science experiment in gaming history.

ELI5: What the Heck Happened?

Imagine you're playing a game, right? You and your 19 closest friends decide to go punch a giant blood god named Hakkar the Soulflayer in the face. This raid boss has a nasty spell called "Corrupted Blood."

Here's how it worked:

  1. You catch it: It drains your health. Fast.
  2. It spreads: If you stand near anyone else, they catch it too. Like a super-flu.
  3. It's meant for the boss room: The disease was programmed to disappear when you died or left the dungeon.

BUT HERE'S THE GLITCH.

Hunter pets (animals that players control) could catch the disease. If a player dismissed their pet while it was sick, the game "froze" the pet's state. When they summoned the pet back in a major city (like Ironforge or Orgrimmar), the pet came back... still sick.

Boom. Patient Zero.

The Virtual Apocalypse

Suddenly, high-level players' pets were nuking entire cities. Low-level players (newbies) were dropping dead instantly just by walking past the auction house. High-level players were scrambling to keep themselves alive, healing frantically.

It was chaos.

  • The Cities: Zones of death. Skeletons everywhere.
  • The Players: Panic. Some fled to the wilderness (social distancing!). Some deliberately spread it (trolls/bioterrorists). Healers tried to set up triage centers.
  • Blizzard (The Devs): They tried quarantines. Failed. They tried warnings. Failed. Eventually, they had to do a hard server reset to scrub the disease from existence.

Why Real Scientists Cared

Here's the wild part. Real-world epidemiologists (the doctors who study diseases) looked at this and went, "Holy crap, this is better than our computer models."

Usually, scientific models assume people act rationally. "If there is a plague, people will stay home." But in WoW, people did human things:

  • Curiosity: "What's happening over there?" -> Dies.
  • Malice: "Imma go infect the newbies lol." -> Spreads plague.
  • Altruism: "I'll heal you!" -> Gets infected, spreads it further.

This accidental glitch provided a perfect, unscripted look at human behavior during a crisis. It showed how fast things spread when people don't follow rules, how asymptomatic carriers (pets/high-level players) can destroy vulnerable populations (low-level players), and how hard it is to contain stupidity.

The GDC Legacy

This wasn't just a "remember when" moment. It became a serious case study. At GDC (Game Developers Conference), this incident is often cited as a prime example of emergent gameplay and complex systems gone wrong (or right, depending on your view).

It taught developers that players will always find a way to break containment. It taught scientists that "Gamer Behavior" might actually be a decent proxy for "Human Panic."

The Rant

It drives me crazy that we had this perfect simulation in 2005, and when 2020 rolled around, we saw the exact same behaviors IRL. The deniers, the spreaders, the people fleeing to the countryside. We didn't learn! We leveled up, but we didn't put any points into Wisdom!

TL;DR: A coding bug in a fantasy game predicted modern pandemic behavior better than some government models. Hakkar the Soulflayer is the ultimate teacher. Wash your hands, dismiss your pets responsibly, and for the love of Azeroth, stop standing in the fire.

Read more...

]]>
<![CDATA[Building the Knowledge Graph: Visualizing Fezcodex in 3D]]> https://fezcode.com/blog/building-the-knowledge-graph https://fezcode.com/blog/building-the-knowledge-graph Sun, 21 Dec 2025 00:00:00 GMT The idea was simple: Fezcodex is growing. With hundreds of blog posts, apps, and project logs, a standard list or grid view just wasn't cutting it anymore. I wanted a way to visualize the connections between everything. To see the "brain" of the website.

Enter the Knowledge Graph Visualization Protocol.

The Concept

I wanted a 3D, interactive network graph where:

  • Nodes represent content (Blog Posts, Apps, Projects).
  • Links represent relationships (Shared tags, Categories).
  • Interaction allows users to fly through the data and navigate to content.

It needed to feel like a "cyberspace" visualization from a sci-fi movie—immersive, dark, and slightly chaotic but organized.

The Tech Stack

  • React: The core framework.
  • react-force-graph-3d: The heavy lifter. This library uses WebGL (via Three.js) to render force-directed graphs with great performance.
  • PIML: My custom markup language for parsing project data.
  • Tailwind CSS: For the overlay UI and brutalist styling.

Implementation Details

1. Data Extraction (graphDataManager.js)

The first challenge was aggregating data from three different sources:

  • posts.json: A static JSON file containing blog metadata.
  • apps.json: A structured list of all the mini-apps.
  • projects.piml: A custom file format for my project portfolio.

I created a utility function fetchGraphData that pulls all three.

export const fetchGraphData = async () => {
  const nodes = [];
  const links = [];
  const tagMap = new Map();

  // ... fetching logic ...

For each item, I created a primary node. Then, I looked at its tags, category, or technologies. For every tag found, I created a tag node (if it didn't exist) and created a link between the item and the tag.

This automatically creates clusters. If five posts are tagged "React", they all link to the "React" tag node, pulling them together in the 3D space.
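
A simplified sketch of that clustering step (buildGraph and the item shape are illustrative; the real fetchGraphData also merges the apps and PIML sources):

```javascript
// Build graph nodes and links, creating one shared node per tag.
function buildGraph(items) {
  const nodes = [];
  const links = [];
  const tagMap = new Map();

  for (const item of items) {
    nodes.push({ id: item.id, name: item.title, type: item.type });
    for (const tag of item.tags) {
      if (!tagMap.has(tag)) {
        const tagNode = { id: `tag:${tag}`, name: tag, type: 'tag' };
        tagMap.set(tag, tagNode);
        nodes.push(tagNode); // create the tag node only once
      }
      links.push({ source: item.id, target: `tag:${tag}` });
    }
  }
  return { nodes, links };
}
```

Since every item tagged "React" links to the single `tag:React` node, the force simulation pulls them into one cluster automatically.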

2. The 3D Component (KnowledgeGraphPage.js)

I used <ForceGraph3D> to render the data.

<ForceGraph3D
    ref={fgRef}
    graphData={graphData}
    backgroundColor="#050505"
    nodeLabel="name"
    nodeColor="color"
    onNodeClick={handleNodeClick}
    // ...
/>

3. Camera Controls

The "cool factor" comes from the camera movement. When you click a node, I didn't want a hard jump. I wanted a smooth flight.

  const handleNodeClick = useCallback((node) => {
    // Calculate a position slightly "outside" the node
    const distance = 40;
    const distRatio = 1 + distance/Math.hypot(node.x, node.y, node.z);

    if (fgRef.current) {
        fgRef.current.cameraPosition(
            { x: node.x * distRatio, y: node.y * distRatio, z: node.z * distRatio }, // new pos
            node, // lookAt
            3000  // ms duration
        );
    }
  }, []);

This calculates a vector from the center (0,0,0) to the node, extends it by a fixed distance, and moves the camera there while focusing on the node.

Challenges

  • PIML Parsing: My custom language parser needed to be robust enough to handle the varying structures of the projects.piml file.
  • Performance: Rendering hundreds of text labels in 3D can be heavy. I kept the UI minimal and show detailed info only on hover.
  • Theme: Matching the "Brutalist/Cyberpunk" aesthetic required careful tuning of node colors (Emerald for Apps, Red for Posts) and link opacity.

The Result

The result is a living, breathing map of Fezcodex. It reveals patterns I didn't explicitly plan—like the massive cluster around "React" or the isolated islands of specific game experiments. It's not just a navigation tool; it's a piece of generative art powered by my own work.

Go check it out at /graph and fly through the system.

Read more...

]]>
<![CDATA[Routing Revolution: SSG, BrowserRouter, and the SEO Fix]]> https://fezcode.com/blog/routing-revolution-ssg-and-seo https://fezcode.com/blog/routing-revolution-ssg-and-seo Sun, 21 Dec 2025 00:00:00 GMT Routing Revolution: SSG, BrowserRouter, and the SEO Fix

For a long time, Fezcodex lived behind the "Hash Gap." If you looked at your address bar, you’d see that familiar /#/ slicing through every URL. While functional, this was the primary reason social media thumbnails were failing and search engines were only seeing the home page.

Today, I’ve completed a total migration to BrowserRouter combined with SSG (Static Site Generation). Here is the technical breakdown of why this was necessary and how it works.

The Problem: The Hash Black Hole

We originally used HashRouter because Fezcodex is hosted on GitHub Pages. Since GitHub Pages is a static file host, it doesn't know how to handle a request for /apps/markdown-table-formatter. It looks for a folder named apps and an index.html inside it. When it doesn't find them, it throws a 404.

HashRouter solved this by putting everything after the #. The server ignores the hash, always serves the root index.html, and React handles the rest.

The SEO Cost: Most crawlers (Twitter, Facebook, Discord) do not execute JavaScript and ignore the hash entirely. To them, every single link you shared looked like fezcode.com/—resulting in generic "Fezcodex - Personal Blog" thumbnails instead of page-specific content.

The Solution Part 1: BrowserRouter

I switched the core engine from HashRouter to BrowserRouter. This gives us "clean" URLs:

  • Old: fezcode.com/#/blog/my-post
  • New: fezcode.com/blog/my-post

But how do we make this work on a static host without a backend?

The Solution Part 2: react-snap & SSG

Enter Static Site Generation via react-snap.

Instead of shipping a nearly empty index.html and letting the browser build the page (Client-Side Rendering), we now build the pages during the deployment phase.

  1. The Crawl: During npm run build, react-snap fires up a headless browser (Puppeteer).
  2. The Snapshot: It visits every route defined in our sitemap and apps list.
  3. The Export: It captures the fully rendered HTML (including meta tags, titles, and unique descriptions) and saves it as a physical index.html file in a matching folder structure.

In our latest build, this generated 281 unique HTML files. Now, when you share a link, the crawler sees a real, static HTML file with the correct Open Graph tags immediately.

The Solution Part 3: Hydration

Once the browser loads the static HTML, we don't want to lose the interactivity of React. I updated src/index.js to use ReactDOM.hydrateRoot.

This process, known as Hydration, allows React to "attach" to the existing HTML already on the screen rather than re-rendering everything from scratch. It preserves the fast initial load of a static site with the power of a modern web app.

Global Content Cleanup

Switching the router was only half the battle. Thousands of internal links within our .piml logs and .txt blog posts still pointed to the old /#/ structure.

I executed a global recursive replacement across the public/ directory:

Get-ChildItem -Path public -Include *.json, *.txt, *.piml, *.md -Recurse | 
ForEach-Object { (Get-Content $_.FullName) -replace '/#/', '/' | Set-Content $_.FullName }

This ensured that the entire ecosystem—from the timeline to the project descriptions—is now synchronized with the new routing architecture.

Conclusion

Fezcodex is no longer just a Single Page Application; it is a high-performance, SEO-optimized static engine. Clean URLs, unique thumbnails, and faster perceived load times are now the standard.

Read more...

]]>
<![CDATA[The Art of the Algorithm: Generative Visuals in Fezcodex]]> https://fezcode.com/blog/art-generation-in-fezcodex https://fezcode.com/blog/art-generation-in-fezcodex Sat, 20 Dec 2025 00:00:00 GMT Art in the digital age isn't just about pixels and brushes; it's about rules, logic, and mathematics. In Fezcodex, I've integrated several "visual experiments" that turn code into aesthetics. From the subtle backgrounds of cards to full-blown design laboratories, let's explore how these algorithms work.

The Heart: GenerativeArt

You might have noticed unique geometric patterns appearing behind various elements in the site. These are powered by the GenerativeArt component.

Seeded Randomness

Imagine you have a robot that can draw. If you tell the robot "draw something random," it might draw a circle today and a square tomorrow. But what if you want the robot to draw the same "random" thing every time you say the word "Apple"?

That is what a Seed does. The algorithm takes a word (like a project name or a date), turns it into a number, and uses that number to make every "random" choice. Because the starting number is the same, the result is always the same. This is why "Project A" always has its own unique, permanent visual identity.
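
A minimal sketch of that idea (not the site's exact implementation): hash the seed string into a number, then feed it to a small deterministic generator like mulberry32.

```javascript
// Turn a seed word into a number.
function hashSeed(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = Math.imul(31, h) + str.charCodeAt(i) | 0;
  }
  return h >>> 0;
}

// mulberry32: a tiny seeded PRNG returning values in [0, 1).
function mulberry32(a) {
  return function () {
    a |= 0; a = a + 0x6D2B79F5 | 0;
    let t = Math.imul(a ^ a >>> 15, 1 | a);
    t = t + Math.imul(t ^ t >>> 7, 61 | t) ^ t;
    return ((t ^ t >>> 14) >>> 0) / 4294967296;
  };
}

// Same seed word, same sequence of "random" choices, every time.
const rand = mulberry32(hashSeed('Project A'));
```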

How it generates symbols

The component uses a "Bauhaus Grid" logic to create symbols. Here is the step-by-step:

  1. The Grid: It divides a square into a 5x5 grid.
  2. The Coin Flip: For each square in the grid, it flips a digital coin to decide if it should draw something there.
  3. The Shape: If it decides to draw, it picks one of four shapes: a square, a circle, a quarter-circle, or a triangle.
  4. The Twist: It rotates the shape by 0, 90, 180, or 270 degrees.
  5. The Color: It picks colors from a palette generated based on the same seed.

By combining these simple rules, the algorithm creates complex, balanced symbols that look like modern art but are just math in disguise.
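
Sketched in code, the five steps look roughly like this (simplified; the real component also maps colorIndex to a seeded palette and renders SVG, and `rand` is any seeded 0-1 generator):

```javascript
// Generate a Bauhaus-grid symbol: one decision per cell of a 5x5 grid.
function generateSymbol(rand) {
  const shapes = ['square', 'circle', 'quarter', 'triangle'];
  const cells = [];
  for (let row = 0; row < 5; row++) {          // 1. the 5x5 grid
    for (let col = 0; col < 5; col++) {
      if (rand() < 0.5) continue;              // 2. the coin flip
      cells.push({
        row, col,
        shape: shapes[Math.floor(rand() * 4)], // 3. pick one of four shapes
        rotation: Math.floor(rand() * 4) * 90, // 4. rotate 0/90/180/270
        colorIndex: Math.floor(rand() * 3),    // 5. pick from the palette
      });
    }
  }
  return cells;
}
```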

The Laboratory: BlendLab

While GenerativeArt is about sharp geometry, BlendLab is about atmosphere and "vibe." It uses a coordinate-based system to create abstract color fields.

In BlendLab, you position different "entities" (points of color) on a digital canvas. The algorithm then applies heavy Gaussian blurs and noise filters. This blends the distinct points into a smooth, flowing gradient. When combined with high-impact typography, it creates a style often seen in modern "Brutalist" design.

The Creative Suite

Beyond these two, Fezcodex houses several other specialized art generators:

  • Topographic Maps: Uses "Perlin Noise" (a type of smooth, natural-looking randomness) to create height maps. By drawing lines at specific height levels, it creates the look of a physical map.
  • Abstract Waves: Uses Trigonometry (Sine and Cosine waves). By layering multiple waves with slight offsets and adding a "noise" distortion, it generates 3D-looking landscapes reminiscent of retro album covers.
  • Fractal Flora: Uses "Recursion"—a function that calls itself. To draw a tree, the code draws a trunk, then tells itself to "draw two smaller branches at the end," and repeats this until a full, organic-looking tree is formed.
  • Spirograph: Uses the classic math of hypotrochoids and epitrochoids. It tracks the path of a point on a circle rolling inside or outside another circle.
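
To make the Fractal Flora recursion concrete, here is a minimal sketch; the branch angles and shrink factor are arbitrary illustrative choices, not the site's exact values.

```javascript
// Recursive fractal tree: draw a branch, then ask for two smaller branches.
// Returns flat line segments {x1, y1, x2, y2} ready to render.
function branch(x, y, angle, length, depth, segments = []) {
  if (depth === 0) return segments;            // recursion stops here
  const x2 = x + Math.cos(angle) * length;
  const y2 = y + Math.sin(angle) * length;
  segments.push({ x1: x, y1: y, x2, y2 });
  // Two children: tilted left and right, 70% of the parent's length.
  branch(x2, y2, angle - 0.4, length * 0.7, depth - 1, segments);
  branch(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments);
  return segments;
}
```

A tree of depth d produces 2^d - 1 segments, which is why deep recursion fills in so quickly.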

Conclusion

Code is often seen as cold and rigid, but when we introduce randomness and recursion, it becomes a brush. Fezcodex is a sandbox for these experiments, proving that the pursuit of code can indeed be an art form.

Read more...

]]>
<![CDATA[Brutalist Fezcodex: The Big Cleanup]]> https://fezcode.com/blog/brutalist-refactor https://fezcode.com/blog/brutalist-refactor Fri, 19 Dec 2025 00:00:00 GMT Today was a huge day for Fezcodex. We did a "Major Refactor."

What did we change?

  1. Brutalist Style: We made the site look bold and strong, like a high-tech terminal. Big letters, sharp edges, and high contrast.
  2. Generative Art: We added "math art" that draws itself in the background. It's unique every time you look at it!
  3. Timeline & Games: We updated the Timeline and the Memory Game to fit this cool new look.
  4. The "Under the Hood" Stuff: We cleaned up the code (the "linter" stuff). We removed unused pieces and fixed tiny mistakes that make the computer happy.

The garden is now cleaner, faster, and much more "Brutalist."

Enjoy the new vibe!

Log Entry: 2025-12-19

Read more...

]]>
<![CDATA[Introducing New Reading Modes: Dossier and Terminal!]]> https://fezcode.com/blog/introducing-reading-experience https://fezcode.com/blog/introducing-reading-experience Wed, 17 Dec 2025 00:00:00 GMT Excited to unveil two brand new ways to experience content on Fezcodex Blogposts: Dossier Mode and Terminal Mode! These unique reading modes are designed to offer a fresh and engaging perspective, allowing you to tailor your browsing experience to your personal style.

What's New?

1. Dossier Mode

Step into the role of an investigator with Dossier Mode. This mode transforms the blogpost's interface into a sleek, document-style layout, reminiscent of classified files and confidential reports. It's perfect for those who appreciate a clean, minimalist aesthetic and want to immerse themselves in content without distractions, feeling like they're poring over important case files.

2. Terminal Mode

For the tech enthusiasts and command-line aficionados, we present Terminal Mode. This mode re-skins blogposts with a retro, monospaced font, glowing green text, and a classic command-line interface feel. It's an homage to the early days of computing, offering a nostalgic and functional environment that's ideal for developers, hackers, or anyone who enjoys a vintage digital vibe while consuming content.

Inspired by Fallout: New Vegas colors

Why build this?

The goal is to continually innovate and provide diverse ways for our users to interact with our content. I believe that offering distinct visual experiences like Dossier and Terminal modes enhances user engagement and allows for a more personalized journey through Fezcodex.

Head over to the Settings page (accessible from the Sidebar). Scroll down to the new Reading Experience section and set your mode.

Oh, One More Thing. Sidebar Colors

The sidebar now also supports multiple background colors. Maybe even some of your favorites.

  • Salmon Light
  • Salmon Medium
  • Blue
  • Green
  • Purple
  • Cyan

Head over to the Settings page again. Scroll down to the new Interface & Layout section, then set your sidebar color under Sidebar Color.

Hope you enjoy exploring these new immersive reading modes. Happy reading!

Read more...

]]>
<![CDATA[Typeface vs. Font: The Music Analogy]]> https://fezcode.com/blog/typeface-vs-font https://fezcode.com/blog/typeface-vs-font Wed, 17 Dec 2025 00:00:00 GMT The easiest way to understand the difference is to think about music:

  • The Typeface is the song itself (the melody, the lyrics, the creative idea).
  • The Font is the MP3 file (the actual digital file you use to play the music).

In design terms:

  • Typeface: The specific design or look of the letters (what you see).
  • Font: The computer file or mechanism that contains the letters (what you use).

In Practice

You choose a Typeface, but you install a Font.

Examples

1. Helvetica

  • Typeface: "Helvetica" (The entire family of letters).
  • Font: Helvetica-Bold.otf (The specific file for the bold version).

2. Times New Roman

  • Typeface: "Times New Roman" (The creative design).
  • Font: Times New Roman, Italic, 12 point (The specific variation you are using on the page).

Summary

If you are talking to a designer about the look, you are talking about a Typeface. If you are talking to a developer about the file or the bold setting, you are talking about a Font.

Read more...

]]>
<![CDATA[Why Your Brain Hates Lyrics While You Work: The Irrelevant Speech Effect]]> https://fezcode.com/blog/the-irrelevant-speech-effect https://fezcode.com/blog/the-irrelevant-speech-effect Wed, 17 Dec 2025 00:00:00 GMT Have you ever tried to read a book while someone next to you is having a loud conversation on the phone? You probably found yourself reading the same sentence three times without understanding a word.

This isn't because you aren't focused; it is because of a psychological glitch called the Irrelevant Speech Effect (ISE).

Here is the simple breakdown of why this happens and why your favorite playlist might be killing your productivity.

What is the "Irrelevant Speech Effect"?

Imagine your brain’s working memory is like a single-lane bridge. This bridge is responsible for processing language—whether that's reading an email, writing code, or studying for an exam.

When you are working, you are sending "cars" (words and thoughts) over this bridge.

  • Silence: The cars move smoothly.
  • Instrumental Music: A little scenery on the side of the road, but the traffic flows.
  • Speech (or Lyrics): This is like a massive truck trying to force its way onto that same single-lane bridge from the opposite direction.

Even if you try to ignore the speech, you can't. Your brain is hardwired to prioritize human voices. It involuntarily tries to process the words it hears, causing a traffic jam on the bridge. This crash is the Irrelevant Speech Effect.

Why You Shouldn't Listen to Lyrical Music While Working

You might think, "I'm not listening to the lyrics, I'm just vibing." Unfortunately, your subconscious disagrees.

If your task involves words (reading, writing, coding, planning), your brain uses a system called the Phonological Loop. This is the inner voice you hear when you read silently.

When you play music with lyrics:

  • Conflict: Your inner voice (reading/thinking) starts fighting with the singer's voice.
  • Processing Power: Your brain wastes energy trying to filter out the singer's words to focus on your own thoughts.
  • Result: Your IQ temporarily drops, you make more mistakes, and you get tired faster.

Real-Life Examples

  • The Coffee Shop Dilemma: You can work fine in a coffee shop with the hum of a machine or clinking cups (white noise). But the moment the couple at the next table starts arguing about their relationship, your focus shatters.
  • The Open Office: You are trying to write an important email, but a colleague two desks away is explaining a recipe. Suddenly, you find yourself typing "pasta" into your professional report.
  • TV in the Background: You think having the news on helps you relax while studying, but you realize you’ve been staring at the same page for 20 minutes because your brain is tracking the reporter's voice.

The Solution?

If you are doing manual labor (like washing dishes), lyrical music is great! It keeps you energized.

But for deep mental work:

  • Stick to Lo-Fi beats, Classical, or Video Game Soundtracks.
  • These genres have no words, so they occupy the "emotional" part of your brain without crashing into the "language" bridge.

Read more...

]]>
<![CDATA[React Hooks Showdown: useMemo vs useCallback vs useState vs useEffect]]> https://fezcode.com/blog/react-hooks-comparison https://fezcode.com/blog/react-hooks-comparison Mon, 15 Dec 2025 00:00:00 GMT 1. useState: The Memory

What it does: Allows a functional component to "remember" information between renders.

When to use: Whenever you have data that changes over time and needs to trigger a re-render to update the UI (e.g., form inputs, toggle states, counters).

const [count, setCount] = useState(0);

// Update state
setCount(count + 1);

2. useEffect: The Side Effect

What it does: Performs side effects in functional components. "Side effects" are things like data fetching, subscriptions, or manually changing the DOM.

When to use: When you need to do something after the component renders or when a specific value changes.

useEffect(() => {
  // This runs after every render
  document.title = `You clicked ${count} times`;

  // Optional cleanup mechanism
  return () => {
    // Clean up code here
  };
}, [count]); // Only re-run if 'count' changes

3. useMemo: The Calculator

What it does: Memoizes (caches) the result of a calculation. It only re-calculates the value when one of its dependencies changes.

When to use: Optimization. Use it to avoid expensive calculations on every render.

const expensiveValue = useMemo(() => {
  return computeExpensiveValue(a, b);
}, [a, b]); // Only re-compute if 'a' or 'b' changes

Note: Don't overuse this. Memoization has its own cost.

4. useCallback: The Function Saver

What it does: Memoizes a function definition. It returns the same function instance between renders unless its dependencies change.

When to use: Optimization. Primarily useful when passing callbacks to optimized child components (like those wrapped in React.memo) to prevent unnecessary re-renders of the child.

const handleClick = useCallback(() => {
  doSomething(a, b);
}, [a, b]); // Function identity remains stable unless 'a' or 'b' changes

Summary Table

Hook        | Returns           | Purpose                      | Re-runs when...
------------|-------------------|------------------------------|--------------------
useState    | [state, setter]   | Manage state                 | Setter is called
useEffect   | undefined         | Side effects                 | Dependencies change
useMemo     | Calculated value  | Cache expensive calculation  | Dependencies change
useCallback | Memoized function | Stable function identity     | Dependencies change

Key Difference: useMemo vs useCallback

  • useMemo caches the result of a function call.
  • useCallback caches the function itself.

useCallback(fn, deps) is equivalent to useMemo(() => fn, deps).
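That equivalence is easier to see if you look at the caching contract both hooks share. Here is a plain-JavaScript sketch (no React; createMemo is an illustrative name, not a real API) of dependency-based caching:

```javascript
// Caches the factory's result until the dependency array changes.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return (factory, deps) => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = factory();
      lastDeps = deps;
    }
    return lastValue;
  };
}

// "useMemo" caches the result of the factory call:
const memo = createMemo();
let computations = 0;
const square = (x) => { computations += 1; return x * x; };

const v1 = memo(() => square(4), [4]); // computes
const v2 = memo(() => square(4), [4]); // same deps -> cached, no recompute
const v3 = memo(() => square(5), [5]); // deps changed -> recomputes
console.log(v1, v2, v3, computations); // 16 16 25 2

// "useCallback" is the same machinery, caching the function itself:
const cb = createMemo();
const makeHandler = (x) => () => x * 2;
const f1 = cb(() => makeHandler(1), [1]);
const f2 = cb(() => makeHandler(1), [1]);
console.log(f1 === f2); // true -- stable identity while deps are unchanged
```

The real hooks also tie the cache to the component instance and render cycle, but the dependency-comparison logic is the same idea.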

Real World Example: Language Switching

A common question is: "Why use useEffect for fetching data when useState holds it?"

Let's say we have a Language Switcher (EN/TR).

The Wrong Way (Trying to use useState for fetching)

// This won't work because fetch is async and returns a Promise, not the data immediately.
const [books] = useState(fetch(`/stories/books_${language}.piml`));

The Right Way (useEffect + useState)

  1. State holds the result (the parsed books).
  2. Effect handles the process (fetching when language changes).

const { language } = useContext(DndContext); // "en" or "tr"
const [books, setBooks] = useState([]); // Holds the data

// Run this side effect whenever 'language' changes
useEffect(() => {
  const fetchData = async () => {
    // 1. Fetch the file based on the dynamic language variable
    const response = await fetch(`/stories/books_${language}.piml`);
    
    // 2. Parse the result
    const text = await response.text();
    const data = parsePiml(text);
    
    // 3. Update state (triggers re-render)
    setBooks(data.books);
  };

  fetchData();
}, [language]); // <--- The dependency that triggers the re-fetch

This pattern ensures that every time the user clicks "TR", the effect re-runs, fetches the Turkish content, updates the state, and the UI refreshes automatically.

Read more...

]]>
<![CDATA[React Magic: Rendering Components from Markdown Links]]> https://fezcode.com/blog/react-magic-markdown-components https://fezcode.com/blog/react-magic-markdown-components Fri, 12 Dec 2025 00:00:00 GMT Static text is boring. In a modern React application, your content should be alive.

Today I want to share a fun pattern I implemented in Fezcodex: triggering dynamic UI interactions directly from standard Markdown links. Specifically, clicking a link in a blog post to open a side panel with a live React component, rather than navigating to a new page.

The Idea

I wanted to explain technical terms like Prop Drilling without forcing the reader to leave the article. A tooltip is too small; a new tab is too distracting. The solution? My global Side Panel.

But how do you tell a static Markdown file to "render a React component in the side panel"?

The Solution

The secret sauce lies in react-markdown's ability to customize how HTML elements are rendered. We can intercept every <a> tag and check if it's a "special" link.

1. The Interceptor (MarkdownLink)

I created a custom component that replaces standard HTML anchors. It checks the href for a specific pattern (in my case, /vocab/).

const MarkdownLink = ({ href, children }) => {
  const { openSidePanel } = useSidePanel();

  // Check if this is a "vocabulary" link
  const isVocab = href && href.includes('/vocab/');

  if (isVocab) {
    // 1. Extract the term ID (e.g., "prop-drilling")
    const term = href.split('/vocab/')[1];

    // 2. Look up the definition/component
    const definition = vocabulary[term];

    return (
      <a
        href={href}
        onClick={(e) => {
          e.preventDefault(); // Stop navigation!
          if (definition) {
            // 3. Trigger the global UI
            openSidePanel(definition.title, definition.content);
          }
        }}
        className="text-pink-400 dashed-underline cursor-help"
      >
        {children}
      </a>
    );
  }

  // Fallback for normal links
  return <a href={href}>{children}</a>;
};

2. The Data (vocabulary.js)

I store the actual content in a simple lookup object. The beauty is that content can be anything--text, images, or fully interactive React components.

export const vocabulary = {
  'prop-drilling': {
    title: 'Prop Drilling',
    content: <PropDrillingDiagram /> // A real component!
  },
  // ...
};

3. Handling "Deep Links"

What if someone actually copies the URL https://fezcodex.com/vocab/prop-drilling and sends it to a friend? The onClick handler won't fire because they aren't clicking a link—they are loading the app.

To handle this, I added a "phantom" route in my Router:

// VocabRouteHandler.js
const VocabRouteHandler = () => {
  const { term } = useParams();
  const navigate = useNavigate();
  const { openSidePanel } = useSidePanel();

  useEffect(() => {
    // 1. Open the panel immediately
    if (vocabulary[term]) {
      openSidePanel(vocabulary[term].title, vocabulary[term].content);
    }
    // 2. Redirect to home (so the background isn't blank)
    navigate('/', { replace: true });
  }, [term]);

  return null;
};

Why this rocks

This pattern effectively turns your static Markdown content into a control surface for your application. You can write:

"Check out this [interactive demo](/demos/sorting-algo)..."

And have it launch a full-screen visualization, a game, or a configuration wizard, all without leaving the flow of your writing. It bridges the gap between "content" and "app".

Read more...

]]>
<![CDATA[Implementing a Resizable Global Sliding Side Panel in React]]> https://fezcode.com/blog/implementing-a-sliding-side-panel https://fezcode.com/blog/implementing-a-sliding-side-panel Thu, 11 Dec 2025 00:00:00 GMT Sometimes, a modal is just too intrusive. You want to show detailed context—like a complex rating system or metadata—without forcing the user to lose their place on the page or blocking the entire UI with a backdrop that demands immediate attention. Enter the Sliding Side Panel.

In this post, I'll walk through how I implemented a global side panel system for Fezcodex, allowing any component in the app to trigger a content-rich overlay that slides in smoothly from the right. Even better? I made it resizable, so users can drag to expand the view if they need more space.

The Goal

The immediate need was simple: I wanted to explain my G4-inspired 5-star rating system on the Logs page. A simple tooltip wasn't enough, and a full modal felt heavy-handed. I wanted a panel that felt like an extension of the UI, sliding in to offer "more details" on demand.

The Architecture

To make this truly reusable, I avoided prop drilling by using the Context API.

Why Context? Avoiding Prop Drilling

Without a global context, implementing a feature like this would require prop drilling. This is a common pattern (or anti-pattern) in React where you pass data or functions down through multiple layers of components just to get them to where they are needed.

Imagine we managed the side panel state in App.js. We would have to pass the openSidePanel function like this:

App → Layout → MainContent → LogsPage → LogCard → InfoButton

Every intermediate component would need to accept and pass along a prop it doesn't even use. This makes refactoring a nightmare and clutters your component signatures. By using the Context API, we can bypass the middle layers entirely. Any component, no matter how deep in the tree, can simply reach out and grab the openSidePanel function directly.

1. The Context (SidePanelContext)

We need a way to tell the app: "Open the panel with this title, this content, and start at this width."

// src/context/SidePanelContext.js
import React, { createContext, useContext, useState } from 'react';

const SidePanelContext = createContext();

export const useSidePanel = () => useContext(SidePanelContext);

export const SidePanelProvider = ({ children }) => {
  const [isOpen, setIsOpen] = useState(false);
  const [panelContent, setPanelContent] = useState(null);
  const [panelTitle, setPanelTitle] = useState('');
  const [panelWidth, setPanelWidth] = useState(450); // Default width

  // openSidePanel now accepts an optional initial width
  const openSidePanel = (title, content, width = 450) => {
    setPanelTitle(title);
    setPanelContent(content);
    setPanelWidth(width);
    setIsOpen(true);
  };

  const closeSidePanel = () => setIsOpen(false);

  return (
    <SidePanelContext.Provider
      value={{
        isOpen,
        panelTitle,
        panelContent,
        panelWidth,
        setPanelWidth,
        openSidePanel,
        closeSidePanel
      }}
    >
      {children}
    </SidePanelContext.Provider>
  );
};

This allows any component to call openSidePanel('My Title', <MyComponent />, 600) to trigger the UI with a custom starting width.

2. The Component (SidePanel)

The visual component uses Framer Motion for silky smooth entrance and exit animations, and vanilla JS event listeners for the resize logic.

// src/components/SidePanel.js
import { motion, AnimatePresence } from 'framer-motion';
import { useState, useEffect } from 'react';
import { useSidePanel } from '../context/SidePanelContext';

const SidePanel = () => {
  const { isOpen, closeSidePanel, panelTitle, panelContent, panelWidth, setPanelWidth } = useSidePanel();
  const [isResizing, setIsResizing] = useState(false);

  // Resize Logic
  useEffect(() => {
    const handleMouseMove = (e) => {
      if (!isResizing) return;
      const newWidth = window.innerWidth - e.clientX;
      // Constrain width: min 300px, max 90% of screen
      if (newWidth > 300 && newWidth < window.innerWidth * 0.9) {
        setPanelWidth(newWidth);
      }
    };

    const handleMouseUp = () => setIsResizing(false);

    if (isResizing) {
      window.addEventListener('mousemove', handleMouseMove);
      window.addEventListener('mouseup', handleMouseUp);
      document.body.style.cursor = 'ew-resize';
      document.body.style.userSelect = 'none'; // Prevent text selection while dragging
    }

    return () => {
      window.removeEventListener('mousemove', handleMouseMove);
      window.removeEventListener('mouseup', handleMouseUp);
      document.body.style.cursor = 'default';
      document.body.style.userSelect = 'auto';
    };
  }, [isResizing, setPanelWidth]);

  return (
    <AnimatePresence>
      {isOpen && (
        <>
          <motion.div onClick={closeSidePanel} className="fixed inset-0 bg-black/50 z-[60]" />

          <motion.div
            initial={{ x: '100%' }}
            animate={{ x: 0 }}
            exit={{ x: '100%' }}
            transition={{ type: 'spring', damping: 25, stiffness: 200 }}
            style={{ width: panelWidth }} // Dynamic width
            className="fixed top-0 right-0 h-full bg-gray-900 border-l border-gray-700 z-[70] flex flex-col"
          >
            {/* Resize Handle */}
            <div
              onMouseDown={(e) => { setIsResizing(true); e.preventDefault(); }}
              className="absolute left-0 top-0 bottom-0 w-1.5 cursor-ew-resize hover:bg-primary-500/50 transition-colors z-50"
            />

             {/* Header & Content */}
          </motion.div>
        </>
      )}
    </AnimatePresence>
  );
};

3. Integration

I wrapped the entire application in the SidePanelProvider in App.js and placed the <SidePanel /> component in Layout.js. This ensures the panel is always available and renders on top of everything else.

Inspiration

The first use case for this panel was to detail the Rating System for my logs. I wanted to pay homage to the classic X-Play (G4TV) scale, emphasizing that a 3 out of 5 is a solid, good score—not a failure.

The side panel proved perfect for this: users can check the rating criteria without leaving the logs list, keeping their browsing flow uninterrupted.

Conclusion

Global UI elements controlled via Context are a powerful pattern in React. By adding a simple resize handle and managing width in the global state, we've transformed a static overlay into a flexible, user-friendly tool that adapts to the user's needs.

Read more...

]]>
<![CDATA[Building a Digital Rotary Phone]]> https://fezcode.com/blog/building-a-digital-rotary-phone https://fezcode.com/blog/building-a-digital-rotary-phone Tue, 02 Dec 2025 00:00:00 GMT In a world of touchscreens and haptic feedback, there's something deeply satisfying about the mechanical click-whirrr of a rotary phone. I recently built a digital version of this retro interface for Fezcodex, and I want to take you through the engineering journey—from the trigonometry of the dial to the state management of the call logic.

The Challenge

Building a rotary phone for the web isn't just about displaying an image. It's about capturing the feel of the interaction. You need to:

  1. Draw a dial with holes.
  2. Detect user input (mouse or touch).
  3. Calculate the rotation based on the pointer's position.
  4. "Drag" the dial realistically.
  5. Snap back when released.
  6. Register the dialed number only if the user drags far enough.

Anatomy of the Dial

I broke the RotaryDial component into a few key layers, stacked using CSS absolute positioning:

  1. The Backplate: This is static. It sits at the bottom and holds the numbers (1, 2, 3... 0) in their correct positions.
  2. The Rotating Disk: This sits on top of the backplate. It rotates based on user interaction. It contains the "holes".
  3. The Finger Stop: A static hook at the bottom right (approx 4 o'clock position) that physically stops the dial on a real phone.

The Trigonometry of Angles

The core of this component is converting a mouse position (x, y) into an angle (theta).

const getAngle = (event, center) => {
  const clientX = event.touches ? event.touches[0].clientX : event.clientX;
  const clientY = event.touches ? event.touches[0].clientY : event.clientY;

  const dx = clientX - center.x;
  const dy = clientY - center.y;
  // atan2 returns angle in radians, convert to degrees
  let theta = Math.atan2(dy, dx) * (180 / Math.PI);
  return theta;
};

Math.atan2(dy, dx) is perfect here because it handles all quadrants correctly, returning values from -PI to +PI (-180 to +180 degrees).

Why Math.atan2?

You might remember SOH CAH TOA from school. To find an angle given x and y, we typically use the tangent function: tan(θ) = y / x, so θ = atan(y / x).

However, Math.atan() has a fatal flaw for UI interaction: it can't distinguish between quadrants.

  • Quadrant 1: x=1, y=1 -> atan(1/1) = 45°
  • Quadrant 3: x=-1, y=-1 -> atan(-1/-1) = atan(1) = 45°

If we used atan, dragging in the bottom-left would behave exactly like dragging in the top-right!

Math.atan2(y, x) solves this by taking both coordinates separately. It checks the signs of x and y to place the angle in the correct full-circle context (-π to +π radians).

We then convert this radian value to degrees: Degrees = Radians * (180 / π)

This gives us a continuous value we can use to map the mouse position directly to the dial's rotation.
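You can verify the quadrant problem directly in plain JavaScript:

```javascript
// atan collapses opposite quadrants; atan2 tells them apart.
const deg = (rad) => rad * (180 / Math.PI);

const topRight = deg(Math.atan(1 / 1));          // 45
const bottomLeftNaive = deg(Math.atan(-1 / -1)); // also 45 -- wrong quadrant!
const bottomLeft = deg(Math.atan2(-1, -1));      // -135 -- correct quadrant

console.log(topRight, bottomLeftNaive, bottomLeft);
```

(Note that screen coordinates put y pointing down, so the on-screen sense of "up" is flipped, but the quadrant argument is identical.)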

The Drag Logic

When a user clicks a specific number's hole, we don't just start rotating from 0. We need to know which hole they grabbed.

Each digit has a "Resting Angle". If the Finger Stop is at 60 degrees, and the holes are spaced 30 degrees apart:

  • Digit 1 is at 60 - 30 = 30 degrees.
  • Digit 2 is at 60 - 60 = 0 degrees.
  • ...and so on.

When the user starts dragging, we track the mouse's current angle relative to the center of the dial. The rotation of the dial is then calculated as:

Rotation = CurrentMouseAngle - InitialHoleAngle
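As a quick sketch of that mapping (the 60° stop and 30° spacing are the example values above; restingAngle and rotation are illustrative helpers, not the app's actual code):

```javascript
const FINGER_STOP = 60; // degrees, example value from above
const SPACING = 30;     // degrees between holes

// Digit 1 is one step back from the stop, digit 2 two steps, and so on.
const restingAngle = (digit) => FINGER_STOP - SPACING * digit;

console.log(restingAngle(1)); // 30 (digit 1)
console.log(restingAngle(2)); // 0  (digit 2)

// While dragging: rotation = current mouse angle - grabbed hole's angle
const rotation = (mouseAngle, digit) => mouseAngle - restingAngle(digit);
console.log(rotation(90, 1)); // 60 degrees of rotation
```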

Handling the "Wrap Around"

One of the trickiest parts was handling the boundary where angles jump from 180 to -180. For numbers like 9 and 0, the rotation requires dragging past this boundary.

If you just subtract the angles, you might get a jump like 179 -> -179, which looks like a massive reverse rotation. I solved this with a normalization function:

const normalizeDiff = (diff) => {
  while (diff <= -180) diff += 360;
  while (diff > 180) diff -= 360;
  return diff;
};

However, simply normalizing isn't enough for the long throws (like dragging '0' all the way around). A normalized angle might look like -60 degrees, but we actually mean 300 degrees of positive rotation.

I added logic to detect this "underflow":

// If rotation is negative but adding 360 keeps it within valid range
if (newRotation < 0 && (newRotation + 360) <= maxRot + 30) {
  newRotation += 360;
}

This ensures that dragging '0' feels continuous, even as it passes the 6 o'clock mark.
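The seam and underflow arithmetic are easy to sanity-check in plain Node (normalizeDiff is the same function shown above):

```javascript
const normalizeDiff = (diff) => {
  while (diff <= -180) diff += 360;
  while (diff > 180) diff -= 360;
  return diff;
};

// Crossing the ±180° seam: 179° -> -179° is really +2°, not -358°.
console.log(normalizeDiff(-179 - 179)); // 2

// And the other direction: -179° -> 179° is -2°, not +358°.
console.log(normalizeDiff(179 - (-179))); // -2

// The "underflow" case: a normalized -60° during a long throw
// really means 300° of positive rotation once we add 360°.
console.log(-60 + 360); // 300
```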

State Management vs. Animation

Initially, I used standard React state (useState) for the rotation. This worked, but setState is asynchronous and can feel slightly laggy for high-frequency drag events (60fps).

I switched to Framer Motion's useMotionValue. This allows us to update the rotation value directly without triggering a full React re-render on every pixel of movement. It's buttery smooth.

const rotation = useMotionValue(0);
// ...
rotation.set(newRotation);

When the user releases the dial (handleEnd), we need it to spring back to zero. Framer Motion makes this trivial:

animate(rotation, 0, {
  type: "spring",
  stiffness: 200,
  damping: 20
});

The "Call" Logic

The drag logic only handles the visual rotation. To actually dial a number, we check the final rotation when the user releases the mouse.

If abs(CurrentRotation - MaxRotation) < Threshold, we count it as a successful dial.

I connected this to a higher-level RotaryPhonePage component that maintains the string of dialed numbers.

Easter Eggs

Of course, no app is complete without secrets. I hooked up a handleCall function that checks specific number patterns:

  • 911: Triggers a red "Connected" state and unlocks "The Emergency" achievement.
  • 0: Connects to the Operator.
  • Others: Just simulates a call.

Visuals

The dial uses Tailwind CSS for styling. The numbers and holes are positioned using transform: rotate(...) translate(...).

  • rotate(angle) points the element in the right direction.
  • translate(radius) pushes it out from the center.
  • rotate(-angle) (on the inner text) keeps the numbers upright!

The result is a responsive, interactive, and nostalgic component that was a joy to build. Give it a spin in the Apps section!

Read more...

]]>
<![CDATA[Nocturnote: A Sleek and Modern Text Editor]]> https://fezcode.com/blog/nocturnote https://fezcode.com/blog/nocturnote Mon, 01 Dec 2025 00:00:00 GMT Nocturnote: The Text Editor I Always Wanted

Have you ever felt like your text editor is either doing too much or too little? That's exactly how I felt before I started building Nocturnote.

[Image: Notepad Mode in Nocturnote]

Nocturnote is my take on a modern, distraction-free writing environment. It's a sleek, cross-platform desktop application designed for those who want to just write, but with the comfort of modern tools.

Why Nocturnote?

I wanted something that looked good, felt fast, and offered just the right amount of customization without being overwhelming.

Key Features

  • Distraction-Free Interface: Clean lines, subtle colors, and a focus on your text.
  • Rain Mode: This is one of my favorites. Toggle it on for a soothing visual effect that adds a cozy atmosphere to your writing sessions.
  • Notepad Mode: Sometimes you just want that classic, stripped-back aesthetic. Nocturnote has you covered.
  • Full Customization: Change fonts, sizes, line heights, and more. Make it yours.

Under the Hood

For the tech-savvy, Nocturnote is built using a robust modern stack:

  • Electron: Ensuring it runs smoothly on Windows, macOS, and Linux.
  • Svelte 5: For a blazing fast and reactive user interface.
  • TypeScript: Because type safety is non-negotiable.
  • Tailwind CSS: For rapid and beautiful styling.
  • Electron-Vite: For a lightning-fast development experience.

Get It

Nocturnote is open source! You can check out the code, contribute, or download it from the repository.

Check out Nocturnote on GitHub

Whether you're coding, journaling, or taking quick notes, I hope Nocturnote provides the calm, productive space you need.

Read more...

]]>
<![CDATA[The Art of Recursive Botany: How Fractal Flora Works]]> https://fezcode.com/blog/how-fractal-flora-works https://fezcode.com/blog/how-fractal-flora-works Fri, 28 Nov 2025 00:00:00 GMT Have you ever wondered how nature creates such intricate and beautiful patterns, from the branching of trees to the delicate veins of a leaf? Much of this complexity can be explained by surprisingly simple rules, often involving fractals and recursion. Our new "Fractal Flora" app lets you explore these principles by growing your own digital trees with a few sliders.

Try apps::flora here

This post will peel back the layers and explain the core mechanics behind the app.

[Image: Fractal Flora (/images/projects/fractal-flora.png)]

What is a Fractal Tree?

At its heart, a fractal tree is a structure where a basic branching pattern repeats itself at smaller scales. Each branch can be thought of as a miniature version of the entire tree. This self-similarity is a hallmark of fractals.

In programming, this concept is perfectly suited for recursion, where a function calls itself to solve smaller instances of the same problem.

The Recursive Algorithm: drawBranch

The entire tree is generated by a single, powerful recursive function, let's call it branch(). It takes a starting point, a length, an angle, and its current depth in the tree.

Here's a simplified look at its logic:

  1. Draw the Current Branch: It draws a line segment from its starting point, at its given length and angle.
  2. Base Case (Stop Condition): If the depth (how many times it has branched) reaches zero, it stops. This prevents infinite recursion.
  3. Branch Out: Otherwise, it calculates the endpoint of the current branch. From this endpoint, it calls itself twice (or more), creating two new "sub-branches." Each sub-branch is drawn with a shorter length, a new angle (offset from the parent branch), and a reduced depth.

const branch = (x, y, len, ang, d) => {
  // 1. Calculate end point of current branch
  const endX = x + len * Math.cos((ang * Math.PI) / 180);
  const endY = y + len * Math.sin((ang * Math.PI) / 180);

  // 2. Draw the branch (context.drawLine(x,y,endX,endY))

  // 3. If not at max depth, recurse
  if (d > 0) {
    const nextLen = len * lengthMultiplier; // e.g., 0.7
    // Right branch
    branch(endX, endY, nextLen, ang + branchAngle, d - 1);
    // Left branch
    branch(endX, endY, nextLen, ang - branchAngle, d - 1);
  }
};

// Initial call (e.g., from bottom center of canvas)
// branch(canvas.width / 2, canvas.height, initialLength, -90, maxDepth);

(Note: The actual implementation in FractalFloraPage.js is slightly more complex, handling canvas transformations, line widths, and randomized elements.)

The "DNA" of Your Digital Tree

The beauty of Fractal Flora lies in how these simple parameters (the tree's "DNA") dramatically change its appearance:

  • Recursion Depth (depth): This controls how many times the branch() function calls itself. A higher depth creates a denser, more complex tree, but also requires more computation.
  • Branch Angle (angle): This is the angle at which new branches diverge from the parent branch. Small angles create tall, narrow trees, while larger angles create wider, more sprawling structures.
  • Length Multiplier (lengthMultiplier): This determines how much shorter each successive branch becomes. A value of 0.7 means a new branch is 70% the length of its parent.
  • Trunk Base Size (lengthBase): The initial length of the very first (main) trunk segment.
  • Wind / Asymmetry (asymmetry): This parameter adds a bias to the branching angle, making one side of the tree grow more dominantly, simulating the effect of wind or environmental factors.
  • Organic Randomness (randomness): This introduces slight, random variations to the length and angle of each branch, breaking the perfect symmetry of mathematical fractals and making the tree appear more organic and natural.
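To get a feel for how depth and the length multiplier interact, here is a non-drawing sketch of the same recursion (grow and its stats object are illustrative names, not the app's code) that just counts segments and tracks the shortest branch:

```javascript
// Binary branching, as in the snippet above, without any canvas work.
function grow(len, depth, lengthMultiplier, stats) {
  stats.segments += 1;
  stats.minLen = Math.min(stats.minLen, len);
  if (depth > 0) {
    const nextLen = len * lengthMultiplier;
    grow(nextLen, depth - 1, lengthMultiplier, stats); // right branch
    grow(nextLen, depth - 1, lengthMultiplier, stats); // left branch
  }
  return stats;
}

const stats = grow(100, 5, 0.7, { segments: 0, minLen: Infinity });
console.log(stats.segments);          // 63 = 2^(depth+1) - 1
console.log(stats.minLen.toFixed(2)); // "16.81" = 100 * 0.7^5
```

Segment count doubles with every extra level of depth, which is exactly why high depth values get expensive so quickly.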

Seasons and Color Palettes

The app also cycles through different "seasons." These aren't complex simulations, but rather pre-defined color palettes for the trunk, branches, and leaves, instantly changing the mood and appearance of your flora.

From Math to Art

What's fascinating is how a few lines of code, driven by recursive mathematical principles, can generate forms that closely mimic those found in nature. Fractals are not just abstract mathematical concepts; they are the language of growth, efficiency, and beauty in the natural world.

Now that you understand the "how," dive back into the Fractal Flora app and become a digital botanist, experimenting with its DNA to create your own unique, algorithmic arboretum!

Read more...

]]>
<![CDATA[Unlocking Your Journey: Introducing the Fezcodex Achievement System!]]> https://fezcode.com/blog/the-fezcodex-achievement-system https://fezcode.com/blog/the-fezcodex-achievement-system Fri, 28 Nov 2025 00:00:00 GMT Here at Fezcodex, we believe exploration should be rewarded. That's why we're thrilled to unveil the brand-new Achievement System – a fun and engaging way to discover all the hidden corners and cool features of the site!

What is it?

The Achievement System gamifies your experience on Fezcodex. As you navigate, interact with our apps, explore visual modes, or simply read through our content, you'll be secretly unlocking various badges and trophies. Think of it as a personalized quest log for your journey through the digital world of Fezcodex!

Why Achievements?

We wanted to make exploring the site more interactive and rewarding. With achievements, you can:

  • Discover Hidden Gems: Uncover features you might not have found otherwise.
  • Track Your Progress: See how much of Fezcodex you've truly experienced.
  • Engage More: Turn casual browsing into a rewarding adventure.

How it Works

The system operates quietly in the background, tracking specific actions:

  1. Triggers: Certain interactions, like opening the Command Palette, enabling a unique visual mode, or visiting a specific page, act as triggers.
  2. Local Storage: Your progress is saved securely and anonymously in your browser's local storage. No data is sent to any server – your achievements are yours alone!
  3. Toast Notifications: When you unlock a new achievement, a subtle (but celebratory!) toast notification will appear to let you know.
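The trigger-plus-storage flow can be sketched in a few lines (ACHIEVEMENTS_KEY and unlockAchievement are hypothetical names for illustration, not Fezcodex's actual code; an in-memory stand-in for localStorage keeps the sketch runnable anywhere):

```javascript
// Minimal stand-in for the browser's localStorage API.
const memoryStorage = (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

const ACHIEVEMENTS_KEY = 'achievements';

function unlockAchievement(id, storage = memoryStorage) {
  const unlocked = JSON.parse(storage.getItem(ACHIEVEMENTS_KEY) || '[]');
  if (unlocked.includes(id)) return false; // already earned -> no toast
  unlocked.push(id);
  storage.setItem(ACHIEVEMENTS_KEY, JSON.stringify(unlocked));
  return true; // caller shows the celebratory toast
}

console.log(unlockAchievement('hello-world')); // true  (first unlock)
console.log(unlockAchievement('hello-world')); // false (already unlocked)
```

Because everything round-trips through storage, a page refresh loses nothing, and no server ever sees the data.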

A Glimpse at Some Achievements

Here are just a few examples of the achievements you can strive for:

  • Hello World: The very first step on your Fezcodex journey.
  • The Hacker: For those who master the Command Palette.
  • Curious Soul: For taking the time to learn more about the creator.
  • The Architect: For appreciating the structural beauty of certain visual modes.
  • Retro Futurist: For embracing the aesthetics of a bygone era.
  • Novice Reader, Avid Reader, Bookworm: For delving into our blog posts and expanding your knowledge!

Visit the Trophy Room!

Want to see your collection? Head over to the new Trophy Room page (accessible via the sidebar) to view all the achievements you've unlocked and see what challenges still await!

We hope this new feature adds an extra layer of fun and discovery to your Fezcodex experience. Happy hunting!

Read more...

]]>
<![CDATA[Unlocking the Multiverse: New Visual Modes in Fezcodex]]> https://fezcode.com/blog/visual-modes-easter-eggs https://fezcode.com/blog/visual-modes-easter-eggs Thu, 27 Nov 2025 00:00:00 GMT Websites should be fun. While optimizing performance and fixing bugs is satisfying, sometimes you just want to flip the table (or, in this case, the entire viewport).

Today, I'm excited to introduce a suite of new Visual Modes to Fezcodex. These are persistent, purely aesthetic toggles that let you experience the site in a completely different light (or lack thereof).

The New Modes

1. Invert Colors (The Upside Down)

Ever wondered what the site looks like in negative? This mode inverts all colors but cleverly rotates the hue by 180 degrees. This prevents photos from looking like scary X-rays and instead creates a cool, alternative color palette.

2. Retro Mode (Cyberpunk 2077)

Feeling nostalgic? Enable Retro Mode to overlay a CRT scanline effect and chromatic aberration (that red/blue text split). It gives the entire UI a gritty, 80s sci-fi terminal vibe.

3. Party Mode (RGB Everywhere)

Boots and cats and boots and cats. This mode continuously cycles the screen's hue through the entire rainbow. Warning: It's colorful. Very colorful.

4. Mirror Mode (Through the Looking Glass)

For those who want a challenge. This flips the entire website horizontally. Text is backwards, layouts are reversed, and your mouse muscle memory will be thoroughly confused. Good luck navigating!

5. Noir Mode (Dramatic Effect)

It was a dark and stormy night... This mode applies a high-contrast grayscale filter, turning the site into a scene from a classic detective film.

6. Terminal Mode (The Hacker)

Jack in. This mode transforms the entire UI into a monochrome green CRT monitor aesthetic. Perfect for feeling like you're browsing the web from a bunker in 1999.

7. Blueprint Mode (The Architect)

For those who appreciate structure. This applies a deep blue, inverted schematic look, making the site resemble an architectural blueprint.

8. Sepia Mode (The Time Traveler)

Dust off the archives. This gives everything a warm, aged parchment tone, perfect for reading through the D&D logs or imagining the site as an ancient manuscript.

How to Access Them

You can unlock these modes in two ways:

1. The Command Palette (For the Power User)

Press Alt + K (or click the "Commands" button in the sidebar) to open the Command Palette. Then, simply type:

  • Toggle Invert Colors
  • Toggle Retro Mode
  • Party Mode
  • Toggle Mirror Mode
  • Toggle Noir Mode
  • Toggle Terminal Mode
  • Toggle Blueprint Mode
  • Toggle Sepia Mode
  • ...or try Do a Barrel Roll for a quick spin!

2. The Settings Page (For the Clicker)

Head over to the Settings page (accessible from the Sidebar). Scroll down to the new Visual Effects section, where you'll find toggle switches for all persistent modes.

Under the Hood

Implementing these was a fun exercise in CSS filters and React context.

  • Persistence: We use a custom usePersistentState hook (a thin wrapper around localStorage) to remember your choices, so your Retro Mode stays on even after you refresh.
  • CSS Magic: Most effects use backdrop-filter on a fixed pseudo-element (body::after). This was crucial to ensure that position: fixed elements (like the Sidebar) didn't break or scroll away when the filters were applied.
  • Global Context: A new VisualSettingsContext manages the state application-wide, ensuring that the Settings page and Command Palette stay in sync.

Go ahead, break the UI. It's a feature, not a bug.

Read more...

]]>
<![CDATA[Reducing React App Bundle Size: A Practical Guide]]> https://fezcode.com/blog/reducing-react-app-size https://fezcode.com/blog/reducing-react-app-size Thu, 27 Nov 2025 00:00:00 GMT Web performance is crucial for user experience. A slow-loading website can drive visitors away before they even see your content. Recently, I noticed that Fezcodex was taking a bit too long to load, so I decided to investigate and optimize the production build.

Here's how I managed to reduce the main bundle size by over 70%, shrinking main.js by approximately 590 kB.

The Diagnosis

When I ran the build command, I noticed the generated main.js file was quite large. In a standard Create React App (CRA) setup, the entire application is often bundled into a single JavaScript file. This means a user has to download every page and component just to see the homepage.

Strategy 1: Code Splitting with React.lazy and Suspense

The most effective way to reduce the initial bundle size is Code Splitting. Instead of loading the entire app at once, we split the code into smaller chunks that are loaded on demand.

React provides built-in support for this via React.lazy and Suspense.

Before:

All pages were imported statically at the top of the routing file:

import HomePage from '../pages/HomePage';
import BlogPage from '../pages/BlogPage';
import ProjectsPage from '../pages/ProjectsPage';
// ... and so on for every page

After:

I refactored the imports to be lazy loaded:

import React, { lazy, Suspense } from 'react';
import Loading from './Loading'; // A simple spinner component

// Lazy Imports
const HomePage = lazy(() => import('../pages/HomePage'));
const BlogPage = lazy(() => import('../pages/BlogPage'));
const ProjectsPage = lazy(() => import('../pages/ProjectsPage'));
// ...

And wrapped the routes in Suspense:

function AnimatedRoutes() {
  return (
    <Suspense fallback={<Loading />}>
       {/* Routes ... */}
    </Suspense>
  );
}

This change ensures that the code for BlogPage is only downloaded when the user actually navigates to /blog.

How Does the Builder Know?

You might wonder: How does the build tool (Webpack, in this case) know to separate these files?

It all comes down to the dynamic import() syntax.

  1. The Trigger: Standard imports (e.g., import X from 'Y') are static; Webpack bundles them immediately. When Webpack encounters import('...'), it recognizes a split point.
  2. Chunk Generation: Webpack cuts that specific module (and its unique dependencies) out of the main bundle and creates a separate file, known as a chunk.
  3. The Glue: The main bundle retains a tiny instruction. It effectively says, "When the application needs this component, send a network request to fetch this specific chunk file."

React.lazy and Suspense simply manage the UI state (like showing the loading spinner) while that asynchronous network request is happening.

Strategy 2: Disabling Source Maps in Production

Source maps are incredibly useful for debugging, as they map the minified production code back to your original source code. However, they are also very large.

By default, Create React App generates source maps for production builds. While the browser only downloads them if you open the developer tools, they still occupy space on the server and can slow down deployment pipelines.

I disabled them in my craco.config.js (since I'm using CRACO to override CRA settings):

webpack: {
  configure: (webpackConfig, { env }) => {
    // Disable sourcemaps for production
    if (env === 'production') {
      webpackConfig.devtool = false;
    }
    return webpackConfig;
  },
},

The Results

The impact was immediate and significant.

  • Before: main.js was heavy, containing the entire application logic.
  • After: main.js reduced by ~590 kB.

Now, the initial load is snappy, and users only download what they need. If you're building a React app with many routes, I highly recommend implementing code splitting early on!

Read more...

]]>
<![CDATA[Mastering Tailwind CSS: The "Absolute Centering" Trick]]> https://fezcode.com/blog/mastering-tailwind-centering https://fezcode.com/blog/mastering-tailwind-centering Wed, 26 Nov 2025 00:00:00 GMT Have you ever tried to center a title in a header, but also wanted a "Back" button or a breadcrumb on the far left?

If you just use flex justify-between, the title gets pushed off-center if the left and right items aren't exactly the same width. It looks messy.

Today, I'm going to show you the "Magic" behind perfectly centering an element while keeping a side item positioned absolutely, using Tailwind CSS.

The Challenge

The goal is to have the Title perfectly centered in the container, regardless of how long the Breadcrumb text on the left is.

The Solution: Absolute Positioning within a Relative Container.

<div className="relative flex flex-col items-center justify-center mb-4">
  {/* Breadcrumb (Absolute on Desktop) */}
  <span className="md:absolute md:left-0 md:top-1/2 md:-translate-y-1/2 ...">
    fc::apps::tcg
  </span>

  {/* Title (Flow Content) */}
  <h1 className="...">Techno TCG Maker</h1>
</div>

Step-by-Step Breakdown

1. The Parent (relative)

<div className="relative flex flex-col items-center justify-center">
  • relative: This defines the "sandbox". Any child with absolute positioning will position itself relative to this box, not the whole page.
  • flex flex-col items-center: By default (mobile), this is just a vertical stack. The breadcrumb sits on top of the title.

2. The Breadcrumb (absolute)

<span className="md:absolute md:left-0 md:top-1/2 md:-translate-y-1/2">
  • md:absolute: On medium screens (desktop) and up, we rip this element out of the document flow. It no longer takes up space, so the Title (which is still in the flow) naturally snaps to the exact center of the parent.
  • md:left-0: "Go to the far left edge."
  • md:top-1/2: "Move your top edge to 50% of the container's height." (On its own this leaves the element sitting too low, because it's the top edge, not the center, that lands at the halfway mark.)
  • md:-translate-y-1/2: "Slide yourself UP by 50% of your own height." This is the golden rule for vertically centering absolute items.

Bonus: Coding Tailwind Like a Pro

To write "clean" Tailwind that produces complex layouts like this, follow these mental models:

A. Think Mobile-First

Notice how I wrote flex-col first, and then md:absolute?

  • Bad: Write for desktop, then try to fix it for mobile.
  • Good: Write for a narrow phone screen. Once that looks good, add md: prefix to change the layout for tablets/laptops.

B. Master the "Invisible Box" (Flexbox)

90% of layout is just Flexbox.

  • flex justify-between: Items push to edges (Left ... Right).
  • flex justify-center: Items bunch in the middle.
  • gap-4: The best way to space items. Never use margin-right on children if you can use gap on the parent.

C. The "Text Gradient" Trick

To get that shiny, futuristic text effect:

  1. bg-gradient-to-r: Define the gradient direction.
  2. from-X to-Y: Define the colors.
  3. bg-clip-text text-transparent: The specific magic that clips the colored background to the shape of the letters and makes the text fill invisible so the background shows through.

D. Memorize the Spacing Scale

Tailwind's scale is usually multiples of 4px (0.25rem).

  • 1 = 4px
  • 4 = 16px (Standard padding/margin)
  • 8 = 32px
  • 16 = 64px

Sticking to this rhythm makes your UI feel consistent and "professional" without you really trying.

Read more...

]]>
<![CDATA[LeetCode 62: Unique Paths - A Dynamic Programming Approach]]> https://fezcode.com/blog/leetcode-62-unique-paths https://fezcode.com/blog/leetcode-62-unique-paths Mon, 24 Nov 2025 00:00:00 GMT LeetCode 62, "Unique Paths," is a classic problem that often serves as an excellent introduction to dynamic programming. It challenges us to find the number of unique paths a robot can take to reach the bottom-right corner of an m x n grid, starting from the top-left corner. The robot can only move either down or right at any point in time.

Problem Description

Imagine a robot positioned at the top-left cell (0,0) of a grid with m rows and n columns. The robot's goal is to reach the bottom-right cell (m-1, n-1). The only allowed moves are one step down or one step right. We need to calculate the total number of distinct paths the robot can take to reach its destination.

Let's visualize a simple 3 x 7 grid:

S . . . . . .
. . . . . . .
. . . . . . F

Where S is the start and F is the finish.

Dynamic Programming Approach

This problem has optimal substructure and overlapping subproblems, making it a perfect candidate for dynamic programming.

Consider a cell (i, j) in the grid. To reach this cell, the robot must have come either from the cell directly above it (i-1, j) by moving down, or from the cell directly to its left (i, j-1) by moving right.

Therefore, the number of unique paths to reach (i, j) is the sum of unique paths to reach (i-1, j) and unique paths to reach (i, j-1).

Let dp[i][j] represent the number of unique paths to reach cell (i, j). The recurrence relation is: dp[i][j] = dp[i-1][j] + dp[i][j-1]

Base Cases:

  • For any cell in the first row (i=0), there's only one way to reach it: by moving right repeatedly from (0,0). So, dp[0][j] = 1.
  • For any cell in the first column (j=0), there's only one way to reach it: by moving down repeatedly from (0,0). So, dp[i][0] = 1.
  • The starting cell (0,0) has dp[0][0] = 1 path (it's already there).

We can build a 2D array (or even optimize space to a 1D array) to store these path counts.

Go Solution

Here's an implementation of the dynamic programming approach in Go:

func uniquePaths(m int, n int) int {
    // Create a 2D DP array (Go zero-initializes the entries;
    // the base cases are filled in below)
    dp := make([][]int, m)
    for i := range dp {
        dp[i] = make([]int, n)
    }

    // Initialize the first row and first column with 1s
    // since there's only one way to reach any cell in the first row/column
    // (by only moving right or only moving down respectively).
    for i := 0; i < m; i++ {
        dp[i][0] = 1
    }
    for j := 0; j < n; j++ {
        dp[0][j] = 1
    }

    // Fill the DP table
    for i := 1; i < m; i++ {
        for j := 1; j < n; j++ {
            dp[i][j] = dp[i-1][j] + dp[i][j-1]
        }
    }

    // The result is the value at the bottom-right corner
    return dp[m-1][n-1]
}
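As noted above, the 2D table can be collapsed to a single row, since each cell depends only on the value above it (still sitting in the row from the previous iteration) and the value to its left (just updated). Here is a sketch of that space optimization; the function name uniquePaths1D is mine, not part of the original solution:

```go
package main

import "fmt"

// uniquePaths1D is a space-optimized variant: O(n) memory instead of O(m*n).
// Before row[j] is updated it still holds dp[i-1][j] (the cell above), and
// row[j-1] already holds dp[i][j-1] (the cell to the left).
func uniquePaths1D(m int, n int) int {
	row := make([]int, n)
	for j := range row {
		row[j] = 1 // first row: exactly one path to each cell
	}
	for i := 1; i < m; i++ {
		for j := 1; j < n; j++ {
			row[j] += row[j-1]
		}
	}
	return row[n-1]
}

func main() {
	fmt.Println(uniquePaths1D(3, 7)) // 28
	fmt.Println(uniquePaths1D(3, 2)) // 3
}
```

The recurrence is identical; only the storage shrinks.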

Combinatorial Approach

Alternatively, this problem can be solved using a combinatorial approach. To reach the bottom-right corner of an m x n grid, the robot must make exactly m-1 'down' moves and n-1 'right' moves. The total number of moves will be (m-1) + (n-1).

The problem then reduces to finding the number of ways to arrange these m-1 down moves and n-1 right moves. This is a classic combinatorial problem: choosing m-1 positions for the 'down' moves (or n-1 positions for the 'right' moves) out of a total of (m-1) + (n-1) moves.

The formula for combinations is C(N, K) = N! / (K! * (N-K)!), where N is the total number of steps and K is the number of 'down' (or 'right') moves.

Go Solution (Combinatorial)

func uniquePathsCombinatorial(m int, n int) int {
    downMoves := m - 1
    rightMoves := n - 1
    totalSteps := downMoves + rightMoves

    // Choose the smaller of downMoves or rightMoves for k to minimize calculations
    k := downMoves
    if rightMoves < downMoves {
        k = rightMoves
    }

    // Formula: C(N, K) = (N/1) * ((N-1)/2) * ... * ((N-k+1)/k)
    // This avoids large factorial calculations by multiplying and dividing
    // iteratively. Integer arithmetic stays exact here: after step i the
    // running value is C(totalSteps, i), so each division has no remainder.
    // (A float64 version risks truncation errors when cast back to int.)
    comb := 1
    for i := 1; i <= k; i++ {
        comb = comb * (totalSteps - i + 1) / i
    }

    return comb
}

Conclusion

The "Unique Paths" problem demonstrates the power of dynamic programming in breaking down a complex problem into simpler, overlapping subproblems. By carefully defining our state and recurrence relation, we can build up the solution efficiently. This particular problem also has a combinatorial solution using binomial coefficients, but the dynamic programming approach is often more intuitive for beginners to DP.

Read more...

]]>
<![CDATA[Gaussian Elimination: The Swiss Army Knife of Linear Systems in Computer Engineering]]> https://fezcode.com/blog/gaussian-elimination https://fezcode.com/blog/gaussian-elimination Sun, 23 Nov 2025 00:00:00 GMT When you hear "linear algebra," your mind might jump to complex math, but at its heart lies a powerful tool called Gaussian Elimination. Far from being just a theoretical concept, this method is a workhorse in various fields of computer engineering, helping us solve systems of linear equations efficiently. In simple terms, it's a systematic way to solve multiple equations with multiple unknowns.

What is Gaussian Elimination? (The Simple Explanation)

Imagine you have two simple equations:

Equation 1: x + y = 5
Equation 2: x - y = 1

You can probably solve this in your head or by simple substitution. Gaussian elimination provides a step-by-step, mechanical way to solve this, even when you have hundreds or thousands of equations and variables.

The core idea is to transform a system of equations into an "echelon form" using three basic operations:

  1. Swapping rows: Change the order of equations.
  2. Multiplying a row by a non-zero number: Scale an equation.
  3. Adding a multiple of one row to another row: Combine equations.

These operations don't change the solution of the system. By applying them strategically, you eliminate variables one by one until you have a very simple system that can be solved by "back-substitution" (solving the last equation first, then plugging its answer into the second-to-last, and so on).

How it Works (A Quick Visual)

Let's represent our equations in a matrix format (an "augmented matrix"):

[ 1  1 | 5 ]
[ 1 -1 | 1 ]

Step 1: Get a leading 1 in the first row, first column. (Already done here!)

Step 2: Make all entries below the leading 1 in the first column zero. Subtract Row 1 from Row 2: R2 = R2 - R1

[ 1  1 | 5 ]
[ 0 -2 | -4 ]

Step 3: Get a leading 1 in the second row, second column. Divide Row 2 by -2: R2 = R2 / -2

[ 1  1 | 5 ]
[ 0  1 | 2 ]

Now the matrix is in row echelon form! We can translate it back to equations:

Equation 1: x + y = 5
Equation 2: y = 2

Step 4: Back-substitution. From Equation 2, we know y = 2. Substitute y = 2 into Equation 1: x + 2 = 5, so x = 3.

So, x = 3 and y = 2. This systematic process is what makes Gaussian Elimination so powerful for computers.

Usages in Computer Engineering

Gaussian Elimination might seem like abstract math, but its ability to efficiently solve linear systems is fundamental to many computer engineering applications:

1. Computer Graphics

  • 3D Transformations: When you move, rotate, or scale objects in 3D space, you're performing linear transformations. Combining these transformations, especially finding inverse transformations, often boils down to solving linear systems.
  • Ray Tracing: Determining intersections between rays and complex 3D objects (like planes or curved surfaces) can involve solving systems of equations.
  • Lighting and Shading: Calculating how light interacts with surfaces (e.g., diffuse, specular components) can also lead to linear systems.

2. Machine Learning and Data Science

  • Linear Regression: Finding the "best fit" line or plane for data points is a classic problem that can be solved by setting up and solving a system of linear equations (normal equations).
  • Solving Optimization Problems: Many optimization algorithms (e.g., in deep learning) involve finding solutions to systems of equations to minimize error functions.

3. Robotics and Control Systems

  • Kinematics: Determining the position and orientation of robot parts based on joint angles (forward kinematics) or finding joint angles to reach a desired position (inverse kinematics) frequently involves solving linear systems.
  • Path Planning: Calculating trajectories for robots to move from one point to another while avoiding obstacles can be formulated using linear equations.

4. Circuit Analysis

  • Kirchhoff's Laws: In electrical engineering, applying Kirchhoff's voltage and current laws to a circuit often results in a system of linear equations that need to be solved to find unknown currents or voltages.

5. Network Flow Problems

  • Routing Algorithms: In computer networks, optimizing data flow, finding shortest paths, or allocating bandwidth can be modeled as systems of linear equations or inequalities, which are then solved using techniques related to Gaussian elimination.

Conclusion

Gaussian Elimination provides a robust and algorithmic approach to a problem that appears everywhere in computing: solving linear systems. From rendering realistic 3D graphics to teaching machines to learn, and from controlling robots to analyzing complex electrical circuits, this mathematical workhorse underpins a vast array of technologies we use every day. Its beauty lies in its simplicity and its profound impact on making complex computational problems tractable.

Read more...

]]>
<![CDATA[Fixing GRUB Syntax Errors Caused by Grub Customizer]]> https://fezcode.com/blog/fixing-grub https://fezcode.com/blog/fixing-grub Sat, 22 Nov 2025 00:00:00 GMT Fixing GRUB Syntax Errors Caused by Grub Customizer

You've updated your system, and suddenly you're greeted with a cryptic GRUB error message:

error: syntax error.
error: Incorrect command.
error: syntax error.
Syntax error at line 221
Syntax errors are detected in generated GRUB config file.
Ensure that there are no errors in /etc/default/grub
and /etc/grub.d/* files or please file a bug report with
/boot/grub/grub.cfg.new file attached.

This error can be frustrating, especially when you haven't manually edited any GRUB configuration files. This blog post will guide you through identifying the source of this problem and how to fix it.

The Source of the Problem: Grub Customizer

In many cases, the culprit behind these GRUB syntax errors is a tool called Grub Customizer. While it offers a graphical interface to manage your GRUB bootloader, it can sometimes cause problems, especially after system updates.

Grub Customizer works by replacing the standard GRUB configuration scripts in /etc/grub.d/ with its own "proxy" scripts. These proxy scripts then call a binary named grubcfg_proxy to apply the customizations. This can lead to a fragile configuration that breaks when other parts of the system are updated.

How to Detect the Source of the Problem

You can confirm if Grub Customizer is the cause of your issues by inspecting the /etc/grub.d/ directory. Open a terminal and run:

ls -l /etc/grub.d/

If you see files with _proxy in their names (e.g., 10_linux_proxy, 30_os-prober_proxy) and directories like backup, bin, and proxifiedScripts, it's a strong indication that Grub Customizer has modified your GRUB configuration.

You might also find a script like this in /etc/grub.d/10_linux_proxy:

#!/bin/sh
#THIS IS A GRUB PROXY SCRIPT
'/etc/grub.d/proxifiedScripts/linux' | /etc/grub.d/bin/grubcfg_proxy "-'SUBMENU' as 'Advanced options for Ubuntu'{-'Advanced options for Ubuntu'/*, -'Advanced options for Ubuntu'/'Ubuntu, with Linux 6.17.0-6-generic'~09ff0eeb66e30428b876bfc87b466e5d~, -'Advanced options for Ubuntu'/'Ubuntu, with Linux 6.17.0-6-generic (recovery mode)'~235ee17b753aaaca5703a4e27ecda63b~}
+*
+#text
-'Ubuntu'~5eca380a341c422accf5af1ff1704fc7~
"%

This non-standard script is a clear sign of Grub Customizer's intervention.

The Approach and Solution

The most reliable way to fix this issue is to completely remove Grub Customizer and restore your GRUB configuration to its default state. This will remove any customizations you've made with the tool, but it will give you a stable and working bootloader.

Here are the steps to follow:

1. Purge Grub Customizer

First, you need to completely remove the grub-customizer package and its configuration files. Run the following command:

sudo apt-get purge grub-customizer

2. Reinstall GRUB

Next, reinstall the GRUB package to ensure all the original scripts are restored in /etc/grub.d/.

sudo apt-get install --reinstall grub-pc

Note: This command is for systems using a traditional BIOS or CSM. If you are using UEFI, you might need to install grub-efi-amd64 or a similar package depending on your architecture.

3. Update GRUB

Finally, regenerate the grub.cfg file with the restored, standard scripts. This command will also run os-prober to detect other operating systems like Windows and add them to the boot menu.

sudo update-grub

After running these commands, your GRUB configuration should be back to a clean, working state, and the syntax errors should be gone.

Conclusion

Grub Customizer can be a convenient tool, but it can also lead to unexpected issues. If you encounter GRUB errors after using it, the best solution is often to remove it and revert to the standard GRUB configuration. By following the steps in this guide, you can quickly resolve these errors and get your system booting correctly again.

Read more...

]]>
<![CDATA[4 Equals For Complete Equalness]]> https://fezcode.com/blog/floating-point-precision-in-javascript https://fezcode.com/blog/floating-point-precision-in-javascript Fri, 21 Nov 2025 00:00:00 GMT Behold, A New Operator ====

About

When 0.1 + 0.2 in JavaScript yields 0.30000000000000004, it highlights a common aspect of computer arithmetic, not a bug. This occurs because JavaScript, like most languages, uses the IEEE 754 standard for floating-point numbers, which relies on binary (base-2) representation.

Decimal fractions like 0.1 and 0.2 cannot be perfectly represented as finite binary fractions; they become infinitely repeating. When these are stored in a finite number of bits, a tiny truncation error is introduced. This slight imprecision in each number accumulates during addition, resulting in a sum that's marginally off from the exact mathematical total.

Solutions

For scenarios requiring precise decimal arithmetic (e.g., financial applications), direct floating-point calculations can be problematic. Consider these approaches:

  1. Rounding: Use toFixed() to round results to a desired decimal precision. Remember to convert the string output back to a number if needed.
    parseFloat((0.1 + 0.2).toFixed(1)); // 0.3
    
  2. Integer Arithmetic: Scale numbers to integers before calculations and then scale the final result back down.
    (0.1 * 10 + 0.2 * 10) / 10; // 0.3
    
  3. Specialized Libraries: For advanced precision, utilize libraries like Big.js or Decimal.js.

This behavior is a fundamental consequence of binary representation in computing, not a flaw in JavaScript, and understanding it is key to handling numerical precision effectively.

Introducing the ==== Operator: For When === Just Isn't Enough

Sometimes, strict equality (===) feels like it's trying too hard to be precise, yet still falls short of our deepest desires for perfect, unyielding truth. For those moments, when you need to compare not just value and type, but also the very essence of existence, I propose the Quadruple Equals Operator (====)!

What does ==== do? Well, it's simple:

  • 0.1 + 0.2 ==== 0.3 would (theoretically) return true. Because in a world where ==== exists, numbers just know what they're supposed to be.
  • "hello" ==== "hello" would, naturally, be true.
  • [] ==== [] might still be false, because even ==== respects the existential uniqueness of array instances. But I am working on it. ¯\_(ツ)_/¯
  • The ==== operator is so powerful, it can detect deep existential equality, ensuring that not only values and types match, but also their historical context, their developer's intent, and their cosmic vibrational frequency.

Alas, ==== is a mere dream, a mythical beast in the JavaScript ecosystem, born from the frustration of floating-point arithmetic. For now, we'll have to stick to our practical solutions. But one can dream of a world where 0.1 + 0.2 ==== 0.3 just makes sense.

Read more...

]]>
<![CDATA[Kaprekar's Routine: A Curious Number Game]]> https://fezcode.com/blog/kaprekars-routine https://fezcode.com/blog/kaprekars-routine Tue, 18 Nov 2025 00:00:00 GMT Have you ever played with numbers and found a surprising pattern? One such fascinating pattern is hidden within Kaprekar's Routine, named after the Indian mathematician D.R. Kaprekar. It's a simple game that, for most four-digit numbers, always leads to the same result: 6174.

Let's dive in and see how it works!

The Rules of the Game

  1. Pick a four-digit number with at least two different digits. (Numbers like 1111, 2222, etc., won't work).
  2. Arrange the digits to form the largest possible number.
  3. Arrange the same digits to form the smallest possible number.
  4. Subtract the smallest number from the largest number.
  5. Repeat steps 2-4 with the new number you get.

You'll be amazed at what happens!

Example 1: Starting with 3524

Let's try with the number 3524:

  • Step 1: Our number is 3524. It has different digits.
  • Step 2: Largest number: 5432
  • Step 3: Smallest number: 2345
  • Step 4: Subtract: 5432 - 2345 = 3087

Now, we repeat the process with 3087:

  • Step 2: Largest number: 8730
  • Step 3: Smallest number: 0378 (remember to include leading zeros to keep it a four-digit number)
  • Step 4: Subtract: 8730 - 0378 = 8352

Repeat with 8352:

  • Step 2: Largest number: 8532
  • Step 3: Smallest number: 2358
  • Step 4: Subtract: 8532 - 2358 = 6174

And there it is! We reached 6174.

Example 2: Starting with 1987

Let's try another one with 1987:

  • Step 1: Our number is 1987.
  • Step 2: Largest number: 9871
  • Step 3: Smallest number: 1789
  • Step 4: Subtract: 9871 - 1789 = 8082

Repeat with 8082:

  • Step 2: Largest number: 8820
  • Step 3: Smallest number: 0288
  • Step 4: Subtract: 8820 - 0288 = 8532

Repeat with 8532:

  • Step 2: Largest number: 8532
  • Step 3: Smallest number: 2358
  • Step 4: Subtract: 8532 - 2358 = 6174

Again, we arrived at 6174!

The Magic of 6174

This number, 6174, is known as Kaprekar's Constant. For almost any four-digit number (with at least two different digits), if you keep applying Kaprekar's routine, you will eventually reach 6174. Once you reach 6174, the next step will always be:

  • Largest: 7641
  • Smallest: 1467
  • Subtract: 7641 - 1467 = 6174

It's a loop!

Kaprekar's routine is a wonderful example of how simple arithmetic operations can lead to unexpected and beautiful mathematical constants. Try it with your own four-digit numbers and see the magic unfold!
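If you'd rather let a program do the arithmetic, here is a small sketch in Go (the function names are mine, not from the post) that runs the routine until it hits 6174. It assumes the input satisfies the rules above, i.e. a four-digit number with at least two different digits:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// kaprekarStep performs one round: arrange the digits in descending and
// ascending order (keeping leading zeros via a 4-digit string), subtract.
func kaprekarStep(n int) int {
	digits := strings.Split(fmt.Sprintf("%04d", n), "")
	sort.Strings(digits)
	asc := strings.Join(digits, "")
	// Reverse the sorted digits for the descending arrangement.
	for i, j := 0, len(digits)-1; i < j; i, j = i+1, j-1 {
		digits[i], digits[j] = digits[j], digits[i]
	}
	desc := strings.Join(digits, "")
	lo, _ := strconv.Atoi(asc) // "0378" parses as 378
	hi, _ := strconv.Atoi(desc)
	return hi - lo
}

// stepsTo6174 counts rounds until Kaprekar's constant is reached.
// Precondition: n is four digits with at least two distinct digits.
func stepsTo6174(n int) int {
	steps := 0
	for n != 6174 {
		n = kaprekarStep(n)
		steps++
	}
	return steps
}

func main() {
	fmt.Println(stepsTo6174(3524)) // 3 (3524 → 3087 → 8352 → 6174)
	fmt.Println(stepsTo6174(1987)) // 3 (1987 → 8082 → 8532 → 6174)
}
```

Both worked examples from above reach the constant in three rounds, matching the hand calculations.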

Read more...

]]>
<![CDATA[The Inevitable Dance of Entropy: A Rant on Chaos]]> https://fezcode.com/blog/chaos-theory-philosophical-rant https://fezcode.com/blog/chaos-theory-philosophical-rant Tue, 18 Nov 2025 00:00:00 GMT Oh, to be human! To crave order, to meticulously plan, to believe that if we just gather enough data, analyze enough variables, we can predict the future. What a glorious, self-deceiving delusion. Because lurking beneath our carefully constructed narratives of cause and effect, there's a mischievous, undeniable truth: Chaos Theory.

And no, I'm not talking about some dry, academic treatise on differential equations. I'm talking about the philosophy of chaos, the infuriating, liberating realization that the universe, and our lives within it, are fundamentally, gloriously, and terrifyingly unpredictable.

We cling to the idea that every grand outcome must have an equally grand progenitor. A monumental decision leads to a monumental consequence. But Chaos Theory, in its most poetic form, whispers (or rather, shouts) about the "butterfly effect." It's the notion, famously articulated by meteorologist Edward Lorenz, that a butterfly flapping its wings in Brazil could, theoretically, set off a tornado in Texas. Think about that for a second. A tiny, almost imperceptible flutter, a mere breath of air, cascading through an infinitely complex system to reshape continents.

How many times have you looked back at a pivotal moment in your life and traced its origin not to a grand choice, but to a forgotten email, a chance encounter, a delayed train, or a spilled cup of coffee? That job you landed? Maybe it wasn't your stellar resume, but the fact that the hiring manager had a particularly good morning because their cat didn't wake them up at 4 AM for once. That relationship that changed everything? Perhaps it began because you took a different route home, avoiding a puddle that would have otherwise sent you down a completely different path.

We build our models, our algorithms, our five-year plans, convinced that if we just perfect the inputs, the outputs will be ours to command. But chaos laughs. It reminds us that even the most minute, unmeasurable perturbation can send the entire system veering off into an entirely new, unforeseen trajectory. It's why weather forecasts beyond a few days are notoriously unreliable, despite supercomputers churning through quadrillions of calculations. It's why economies crash when a seemingly minor market fluctuation triggers a cascade of panic.

And this, my friends, is where the "rant" truly begins. Because while our rational minds scream for control, for certainty, for a predictable narrative, chaos offers none. It offers a beautiful, maddening dance where every step influences the next in ways we can never fully grasp. It's the ultimate cosmic prank, reminding us of our infinitesimal place in a universe that cares not for our spreadsheets or our anxieties.

So, what's the point? To despair? To throw our hands up and surrender to the whims of the universe? Perhaps. Or perhaps, it's to find a strange, unsettling peace in the surrender. To embrace the fact that life is less a meticulously crafted blueprint and more a jazz improvisation – full of unexpected notes, beautiful accidents, and moments of pure, unadulterated, glorious chaos.

Stop trying to control the wind; learn to sail. Stop trying to predict the butterfly; just marvel at its flight. Because in the heart of that unpredictability lies the very essence of life's adventure. And maybe, just maybe, that's a rant worth having.

Read more...

]]>
<![CDATA[Minimum Number of Steps to Make Two Strings Anagram]]> https://fezcode.com/blog/minimum-number-of-steps-to-make-two-strings-anagram https://fezcode.com/blog/minimum-number-of-steps-to-make-two-strings-anagram Mon, 17 Nov 2025 00:00:00 GMT LeetCode 1347: Minimum Number of Steps to Make Two Strings Anagram

Problem Description

Given two strings s and t of the same length, you want to change t in the minimum number of steps such that it becomes an anagram of s. A step consists of replacing one character in t with another character.

An Anagram is a word or phrase formed by rearranging the letters of a different word or phrase, typically using all the original letters exactly once. For example, "anagram" and "nagaram" are anagrams.

Both strings consist of lowercase English letters.

Example 1: Input: s = "bab", t = "aba" Output: 1 Explanation: Replace the first 'a' in t with b, t = "bba" which is an anagram of s.

Example 2: Input: s = "leetcode", t = "practice" Output: 5 Explanation: Replace 'p', 'r', 'a', 'i', 'c' in t with 'l', 'e', 'e', 't', 'd' to form an anagram of s.

Example 3: Input: s = "anagram", t = "mangaar" Output: 0 Explanation: "anagram" is already an anagram of "mangaar".

Solution in Go

The core idea to solve this problem is to count the frequency of each character in both strings s and t. Since we want to transform t into an anagram of s by replacing characters in t, we need to identify characters in t that are "excess" compared to what s needs.

For each character from 'a' to 'z':

  1. Count its occurrences in s.
  2. Count its occurrences in t.
  3. If the count of a character in t is greater than its count in s, it means t has t_count - s_count extra occurrences of this character. These extra occurrences must be replaced to match the character distribution of s.
  4. The sum of these differences for all characters will give us the minimum number of steps.

This approach works because we only care about the characters that are overrepresented in t. Any characters that are underrepresented in t (i.e., t_count < s_count) will be formed by replacing the overrepresented characters; since s and t have the same length, the total excess always equals the total deficit. The total number of replacements needed is therefore exactly the sum of the excesses, giving an O(n) time, O(1) extra-space solution.

package main

import "fmt"

func minSteps(s string, t string) int {
	sFreq := make([]int, 26) // Frequency array for string s
	tFreq := make([]int, 26) // Frequency array for string t

	// Populate frequency array for string s
	for _, char := range s {
		sFreq[char-'a']++
	}

	// Populate frequency array for string t
	for _, char := range t {
		tFreq[char-'a']++
	}

	steps := 0
	// Compare frequencies and calculate steps
	for i := 0; i < 26; i++ {
		// If character 'i' appears more times in t than in s,
		// these are the characters that need to be changed.
		if tFreq[i] > sFreq[i] {
			steps += tFreq[i] - sFreq[i]
		}
	}

	return steps
}

func main() {
	// Test cases
	fmt.Println(minSteps("bab", "aba"))    // Expected: 1
	fmt.Println(minSteps("leetcode", "practice")) // Expected: 5
	fmt.Println(minSteps("anagram", "mangaar")) // Expected: 0
	fmt.Println(minSteps("xxyyzz", "xxyyzz")) // Expected: 0
	fmt.Println(minSteps("friend", "family")) // Expected: 4
}

Hashmap Solution

An alternative uses a single map: increment the count for each character of s, decrement for each character of t, then sum the absolute values of the leftover counts. Because every replacement simultaneously removes one excess character from t and fills one deficit, the total is divided by 2.

package main

import (
	"fmt"
)

func minSteps(s string, t string) int {
	m := map[string]int{}
	for i := 0; i < len(s); i++ {
		m[string(s[i])]++
	}
	for i := 0; i < len(t); i++ {
		m[string(t[i])]--
	}
	steps := 0
	for _, v := range m {
		steps += abs(v)
	}
	return steps / 2

}

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	fmt.Println(minSteps("bab", "aba"))
	fmt.Println(minSteps("leetcode", "practice"))
	fmt.Println(minSteps("anagram", "mangaar"))
	fmt.Println(minSteps("xxyyzz", "xxyyzz"))
	fmt.Println(minSteps("friend", "family"))
}

Read more...

]]>
<![CDATA[Decoding the Digital Alphabet: A Comprehensive Guide to BaseXX Encodings]]> https://fezcode.com/blog/decoding-the-digital-alphabet-base-xx-encodings https://fezcode.com/blog/decoding-the-digital-alphabet-base-xx-encodings Sun, 16 Nov 2025 00:00:00 GMT Introduction

In the digital realm, data often needs to be transformed for various purposes, such as safe transmission over different mediums, storage, or simply to make it more human-readable. This is where "BaseXX" encodings come into play. These methods convert binary data into a textual representation using a specific set of characters, known as an alphabet. While Base64 is perhaps the most widely known, a diverse family of BaseXX encodings exists, each with its unique characteristics and ideal use cases. This post will explore Base32, Base58, Base62, Base64, and Base85, comparing their features and shedding light on why you might choose one over another.

Understanding the Basics: How BaseXX Encodings Work

At its core, BaseXX encoding involves representing binary data (sequences of bits) as a string of characters from a predefined alphabet. The "XX" in BaseXX refers to the size of this alphabet. For example, Base64 uses an alphabet of 64 characters. The larger the alphabet, the more efficiently data can be represented (i.e., fewer characters are needed to encode the same amount of binary data), but it might come at the cost of readability or URL-safety.
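To make the ratio concrete, here is a quick Node.js sketch using the built-in Buffer API; the classic "Man" → "TWFu" example shows 3 bytes becoming 4 characters:

```javascript
// Base64 packs every 3 input bytes (24 bits) into 4 characters of 6 bits each.
const raw = Buffer.from('Man'); // 3 bytes
const encoded = raw.toString('base64');
console.log(encoded); // 'TWFu' — 4 characters for 3 bytes (~33% larger)
```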

The BaseXX Family: A Closer Look

Base32: The Human-Friendly Choice

  • Character Set: Uses 32 characters, typically uppercase letters A-Z and digits 2-7 (the RFC 4648 alphabet). Digits 0, 1, 8, and 9 are excluded, largely to avoid confusion with the letters O, I, and B, enhancing human readability and reducing transcription errors.
  • Encoding Scheme: Encodes 5 bytes of binary data (40 bits) into 8 printable characters, with each character representing 5 bits.
  • Efficiency: Less efficient than Base64 or Base85, increasing data size by approximately 60%.
  • Human Readability: Designed for case-insensitivity and manual entry, making it suitable for environments where human interaction with the encoded string is common.
  • Use Cases: Product activation codes, DNSSEC, QR codes, and situations requiring case-insensitive identifiers.

Base58: Cryptocurrency's Foundation

  • Character Set: A 58-character alphanumeric alphabet that intentionally excludes visually ambiguous characters like 0 (zero), O (uppercase O), I (uppercase I), and l (lowercase L). It also omits '+' and '/' found in Base64.
  • Encoding Scheme: Works by treating binary data as a large integer and converting it to a base-58 representation.
  • Efficiency: Less compact than Base64 or Base85, requiring about 25% more characters than Base64 for the same data.
  • Human Readability: Highly optimized for human readability and transcription, significantly minimizing errors due to similar-looking characters.
  • Use Cases: Widely used for cryptocurrency addresses (e.g., Bitcoin) and other identifiers in decentralized systems (e.g., IPFS content hashes) where an error-resistant, human-friendly representation is crucial.
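To illustrate the "large integer" conversion, here is a minimal, hypothetical Base58 encoder sketch using BigInt. The alphabet below is the Bitcoin variant; `base58Encode` is not a standard library API:

```javascript
// Hypothetical sketch: Base58 encoding by big-integer division.
const BASE58_ALPHABET =
  '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function base58Encode(bytes) {
  // Interpret the byte sequence as one big base-256 integer.
  let n = 0n;
  for (const b of bytes) n = n * 256n + BigInt(b);

  // Repeatedly divide by 58, collecting remainders as digits.
  let out = '';
  while (n > 0n) {
    out = BASE58_ALPHABET[Number(n % 58n)] + out;
    n /= 58n;
  }

  // Leading zero bytes carry no numeric value, so re-add them as '1'
  // (the alphabet's zero digit), as Bitcoin's encoding does.
  for (const b of bytes) {
    if (b !== 0) break;
    out = '1' + out;
  }
  return out;
}

console.log(base58Encode(Buffer.from('hello'))); // 'Cn8eVZg'
```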

Base62: The Compact URL-Safe Option

  • Character Set: Uses 62 alphanumeric characters (a-z, A-Z, 0-9).
  • Encoding Scheme: Similar to other BaseXX methods, it converts binary data into a string using its 62-character alphabet.
  • Efficiency: More compact than Base32 and Base58.
  • Human Readability: Generally good, as it only uses alphanumeric characters.
  • Use Cases: Ideal for short URLs, unique ID generation, and any scenario where a compact, URL-safe, and human-readable string is desired without padding.

Base64: The Ubiquitous Standard

  • Character Set: Uses 64 characters: uppercase letters (A-Z), lowercase letters (a-z), digits (0-9), and two symbols, typically '+' and '/'. An optional padding character '=' is used to ensure encoded output is a multiple of 4 characters.
  • Encoding Scheme: Encodes 3 bytes of binary data (24 bits) into 4 characters, with each character representing 6 bits.
  • Efficiency: More efficient than Base32 and Base58, increasing data size by approximately 33%.
  • Human Readability: Less human-readable than Base32, Base58, or Base62 due to the inclusion of symbols and padding.
  • Use Cases: Encoding binary data in text-based formats like email (MIME), web APIs (JSON, XML), and embedding images directly into HTML or CSS. URL-safe variants (e.g., replacing '+' with '-' and '/' with '_') are often used for web applications.
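As a sketch of the URL-safe variant mentioned above (Node.js also ships a built-in 'base64url' encoding since v15, but the manual substitution shows exactly what changes):

```javascript
const data = Buffer.from([0xfb, 0xef, 0xff]); // bytes chosen to produce '+' and '/'
const standard = data.toString('base64'); // '++//' — unsafe inside URLs
const urlSafe = standard
  .replace(/\+/g, '-')
  .replace(/\//g, '_')
  .replace(/=+$/, ''); // padding is usually dropped as well
console.log(urlSafe); // '--__'
```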

Base85 (Ascii85): The Efficiency Champion

  • Character Set: Employs 85 printable ASCII characters, often ranging from '!' to 'u'.
  • Encoding Scheme: Encodes groups of 4 bytes of binary data (32 bits) into 5 ASCII characters. A special shortcut 'z' can represent four null bytes.
  • Efficiency: The most efficient of these encodings, offering superior data density. It increases data size by only 25% (5 characters for 4 bytes).
  • Human Readability: The least human-readable due to its wider range of punctuation characters, which can be problematic in some contexts.
  • Use Cases: Commonly found in Adobe's PostScript and PDF file formats, and used by Git for encoding binary patches, where compactness is prioritized over human readability.

Comparison Summary

Feature | Base32 | Base58 | Base62 | Base64 | Base85 (Ascii85)
Character Set | 32 (A-Z, 2-7) | 58 (alphanumeric, excludes 0, O, I, l) | 62 (a-z, A-Z, 0-9) | 64 (A-Z, a-z, 0-9, +, /) | 85 (printable ASCII '!' to 'u')
Encoding Ratio | 5 bytes to 8 chars | Variable | Variable | 3 bytes to 4 chars | 4 bytes to 5 chars
Efficiency | ~60% overhead | ~25% more than Base64 | Good | ~33% overhead | ~25% overhead (most efficient)
Human Readability | Good (case-insensitive, limited set) | Excellent (avoids ambiguous chars) | Good (alphanumeric only) | Moderate (includes symbols, padding) | Poor (many punctuation chars)
URL-Safe | Yes | Yes | Yes | No (requires variants for web) | No
Padding | Yes (typically '=') | No | No | Yes (typically '=') | No (can use 'z' for null bytes)
Key Use Cases | DNSSEC, QR codes, human-typed keys | Cryptocurrency addresses, short URLs | Short URLs, unique IDs | Email (MIME), web APIs, embedding data | PDF, PostScript, Git binary patches

Conclusion

The choice of BaseXX encoding depends heavily on the specific requirements of your application. If human readability and error reduction during manual transcription are paramount, Base32 or Base58 might be your best bet. For compact, URL-safe identifiers, Base62 offers a compelling solution. Base64 remains the workhorse for general binary-to-text encoding in web and email contexts, while Base85 shines when maximum data density is the primary concern, even at the expense of human readability. Understanding these distinctions allows developers to select the most appropriate encoding method for their particular needs, optimizing for efficiency, safety, and usability.

Read more...

]]>
<![CDATA[Fezcodex Stories with `git subtree`]]> https://fezcode.com/blog/fezcodex-stories-with-git-subtrees https://fezcode.com/blog/fezcodex-stories-with-git-subtrees Fri, 14 Nov 2025 00:00:00 GMT Fezcodex Stories with git subtree

Let's cover how we integrate the fezcodex.stories repository to store and display our stories (DnD) section.

Integrating External Content Seamlessly with Git Subtree: A Practical Guide

In modern web development, it's common to need to incorporate content or even entire sub-projects from external Git repositories into your main project. Whether it's a shared library, documentation, or, as in our case, a collection of stories or blog posts, managing this external content efficiently is key. Git offers a couple of powerful tools for this: git submodule and git subtree.

While git submodule is excellent for managing distinct project dependencies, git subtree often shines when you want to integrate external content directly into your repository as if it were always part of it, especially when you need to easily pull updates. Let's dive into how git subtree can help you manage external content like your fezcodex.stories within your public/stories directory.

Why Choose Git Subtree?

When deciding between git submodule and git subtree, consider these advantages of git subtree for content integration:

  • Integrated History: The content of the external repository becomes a part of your main repository's history. This means anyone cloning your main repository gets all the content directly, without needing extra steps.
  • Simpler Cloning: A regular git clone of your main repository will fetch all the subtree content. No special commands like git submodule update --init --recursive are required for collaborators.
  • Easy Updates: Keeping your integrated content up-to-date with the original source is straightforward with a single git subtree pull command.
  • No .gitmodules: git subtree doesn't introduce additional configuration files like .gitmodules, keeping your repository root cleaner.
  • Works with Existing Tools: Since the content is fully integrated, all your existing Git tools and workflows (like git grep, git log) work seamlessly across your entire project, including the subtree content.

Setting Up Your Git Subtree: Step-by-Step

Let's walk through the process of adding the fezcodex.stories repository into your public/stories directory.

Prerequisites:

Before you begin, ensure your working directory is clean. Git commands like git subtree add prefer a state where there are no uncommitted changes to prevent conflicts.

  • Check your status: Run git status to see if you have any pending changes.
  • Commit or Stash: If you have modifications, either commit them (git add . && git commit -m "WIP: Prepare for subtree addition") or temporarily stash them (git stash).

Step 1: Add the External Repository as a Remote

First, we'll add the external repository as a remote to your current Git project. This gives it a short, memorable name that you can use to reference it later.

Important Note for Collaborators: Since Git does not track remotes in the repository itself, every time you clone this project fresh, you must re-run this step to enable syncing. In this project, we've simplified this with a command: npm run init-stories.

Explanation: This command tells your local Git repository about the existence of the fezcodex.stories repository and associates it with the name fezcodex-stories. This makes it easier to fetch from or push to this external repository without typing out the full URL every time.

Command:

git remote add fezcodex-stories https://github.com/fezcode/fezcodex.stories

Step 2: Add the Remote as a Subtree

Now, we'll integrate the content from the fezcodex-stories remote into a specific directory within your project (public/stories).

Explanation:

  • git subtree add: This is the core command to add a subtree.
  • --prefix public/stories: This specifies the local directory within your main project where the content from the external repository will reside. Git will create this directory if it doesn't exist.
  • fezcodex-stories: This is the name of the remote you defined in Step 1.
  • main: This indicates the branch from the fezcodex-stories remote that you want to pull. Important: Double-check the default branch name of the external repository (it might be master instead of main).
  • --squash: This option is highly recommended. It squashes all the commits from the external repository's history into a single commit when adding it to your main repository. This keeps your main project's commit history cleaner, preventing it from being flooded with potentially hundreds of commits from the external source.

Command:

git subtree add --prefix public/stories fezcodex-stories main --squash

Managing Your Git Subtree

Once your subtree is set up, here's how you'll typically interact with it.

Pulling Updates from the Subtree Source

The primary reason for using git subtree for content is to easily keep it updated. When the original fezcodex.stories repository has new content, you can pull those changes into your project.

Explanation: This command is very similar to the add command, but pull fetches the latest changes from the specified remote and branch, and then merges them into your local subtree directory. The --squash option again helps to keep your history tidy by squashing the incoming changes into a single merge commit.

Command:

git subtree pull --prefix public/stories fezcodex-stories main --squash

Making Changes within the Subtree and Pushing Back (Optional)

Sometimes, you might make modifications to the files within your public/stories directory (the subtree content) and wish to contribute those changes back to the original fezcodex.stories repository.

Explanation:

  • First, commit your changes in your main repository as you normally would.
  • Then, use git subtree push. This command takes the commits related to your public/stories directory and pushes them to the main branch of the fezcodex-stories remote.
  • Important: You must have push access to the original https://github.com/fezcode/fezcodex.stories repository for this to work. If you don't, you'd typically fork the original repository, push to your fork, and then open a pull request.

Command:

git subtree push --prefix public/stories fezcodex-stories main

Removing a Git Subtree (If Needed)

If you ever need to remove the subtree, it's a multi-step process:

Explanation:

  1. git rm -r public/stories: This removes the directory and its contents from your working tree and stages the deletion.
  2. git commit -m "Remove subtree public/stories": Commits the removal.
  3. git remote rm fezcodex-stories: Removes the remote reference you added earlier.
  4. You might also want to clean up any leftover Git configuration related to the subtree, though git remote rm handles the main part.

Commands:

git rm -r public/stories
git commit -m "Remove subtree public/stories"
git remote rm fezcodex-stories

Conclusion

git subtree provides a robust and integrated way to manage external content within your main Git repository. It simplifies collaboration by making external content directly available upon cloning and streamlines the update process. By following these steps, you can effectively incorporate and maintain your fezcodex.stories content, or any other external project, within your public/stories directory.

Read more...

]]>
<![CDATA[Publishing to NPM]]> https://fezcode.com/blog/publish-to-npm https://fezcode.com/blog/publish-to-npm Thu, 13 Nov 2025 00:00:00 GMT How to Publish a Node.js Library to npm

This document outlines the steps taken to publish the piml.js library to the npm registry.

1. Initial Setup and Conversion

  • A piml.js file was created to house the JavaScript library.
  • A piml.test.js file was created to test the JavaScript library.

2. Setting up the Node.js Project

To prepare the project for npm, the following steps were taken:

  • package.json: A package.json file was created to manage the project's metadata and dependencies. It was populated with the following information:

    • name: The name of the package on npm (e.g., "piml").
    • version: The initial version of the package (e.g., "1.0.0").
    • description: A brief description of the package.
    • main: The entry point of the package (e.g., "piml.js").
    • scripts: A "test" script to run the tests using Jest.
    • keywords: Keywords to help users find the package on npm.
    • author: The author of the package.
    • license: The license of the package (e.g., "MIT").
    • devDependencies: The development dependencies, such as jest.
  • .gitignore: A .gitignore file was created to prevent unnecessary files from being committed to the repository, such as node_modules, logs, and system files.

  • Dependencies Installation: The development dependencies were installed by running npm install.

3. Testing

With the project set up, the tests were run to ensure the library was working correctly:

npm test

Any failing tests were debugged and fixed until all tests passed.

4. Publishing to npm

Once the library was tested and ready, the following steps were taken to publish it to npm:

  1. Create an npm Account: An npm account is required to publish packages. You can create one at https://www.npmjs.com/signup.

  2. Log in to npm: From the command line, you need to log in to your npm account:

    npm login
    

    You will be prompted to enter your npm username, password, and email address.

  3. Check Package Name Availability: Before publishing, it's a good practice to check if the desired package name is available. This can be done by running:

    npm view <package-name>
    

    If the package exists, you will see information about it. If it doesn't, you will get a 404 error, which means the name is available.

  4. Publish the Package: To publish the package, run the following command from the project's root directory:

    npm publish
    

    If the package name is scoped (e.g., @username/package-name), you need to use the --access public flag:

    npm publish --access public
    
  5. Verify the Package: After publishing, you can verify that the package is available on npm by visiting https://www.npmjs.com/package/<your-package-name>.

By following these steps, the piml.js library was successfully published to the npm registry.

Read more...

]]>
<![CDATA[Parenthesis Intended Markup Language]]> https://fezcode.com/blog/piml https://fezcode.com/blog/piml Wed, 12 Nov 2025 00:00:00 GMT piml

Spec version: v1.1.0

Available Libraries

JSON<->PIML Converter

Parenthesis Intended Markup Language

In the ever-evolving landscape of data serialization formats, PIML (Parenthesis Intended Markup Language) emerges as a compelling alternative, prioritizing human readability and writability without compromising machine parseability. This post delves into the core tenets of PIML, exploring its syntax, data types, and how it stacks up against established formats like JSON, YAML, and TOML.

What is PIML?

PIML is a data serialization format designed for clarity and ease of use by both humans and machines. It leverages a unique (key) syntax and indentation-based nesting to create a visually intuitive representation of structured data. Conceived as a middle ground between the verbosity of JSON and the potential ambiguity of YAML, PIML aims to offer a clean, unambiguous, and highly readable format for various data exchange and configuration needs.

Syntax Rules: The Building Blocks of PIML

PIML's syntax is intentionally minimal, focusing on consistency and clarity.

Keys

Keys are the identifiers for data elements and are always enclosed in parentheses. This explicit demarcation makes keys instantly recognizable.

(my_key) my_value
(another key with spaces) another_value

Indentation

Indentation is fundamental to PIML's structure, defining hierarchical relationships between data elements.

  • Recommendation: Use 2 spaces for each level of indentation to maintain visual consistency.
  • Strict Rule: Mixing tabs and spaces for indentation is prohibited to prevent parsing ambiguities.

Comments

PIML supports single-line comments using the # symbol. Anything from # to the end of the line is ignored by parsers, allowing for clear inline documentation.

  • Rule: Only lines that start with # are treated as comments. Inline comments (e.g., (key) value # comment) are not supported and will be considered part of the value.
# This explains the data
(data) value # This entire line is the value, not a comment

Escaping

The backslash (\) character is used to escape special characters within string values, ensuring that characters like ( or # can be part of the data itself.

  • Common escapes include \n (newline), \t (tab), and \\ (literal backslash).
  • Example: (title) My \(Awesome\) Title
  • To include a # character at the beginning of a line within a multi-line string, escape it with a backslash (\), e.g., \# This is not a comment.
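The key and comment rules above are already enough to sketch a tiny parser for flat, top-level entries. This is a hypothetical helper, not an official PIML library API; it deliberately ignores nesting, escapes, and multi-line values:

```javascript
// Hypothetical sketch: parse flat, top-level PIML key/value lines.
function parseFlatPiml(text) {
  const result = {};
  for (const line of text.split('\n')) {
    if (line.trim() === '' || line.startsWith('#')) continue; // blank or comment
    const match = line.match(/^\(([^)]+)\)\s?(.*)$/); // (key) value
    if (match) result[match[1]] = match[2];
  }
  return result;
}

console.log(parseFlatPiml('# profile\n(name) John Doe\n(age) 30'));
// { name: 'John Doe', age: '30' }
```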

Data Types: Representing Information in PIML

PIML supports a range of data types, from simple primitives to complex nested structures.

Primitive Types

  • Single-line Strings: Unquoted text values that reside on a single line.
    (name) John Doe
    
  • Integers: Whole numbers.
    (age) 30
    
  • Floats: Decimal numbers.
    (price) 19.99
    
  • Booleans: Logical values, represented as true or false.
    (is_active) true
    

Null and Empty Representations: The nil Unifier

One of PIML's distinctive features is the use of nil as a unified representation for three distinct states:

  1. Null: The absence of a value.
  2. Empty Array: An empty list.
  3. Empty Object: An empty map.
(optional_setting) nil
(empty_items) nil
(empty_config) nil

This design choice prioritizes syntactic simplicity, though it means the specific type of an empty collection (array vs. object) is not preserved when nil is used.

Multi-line Strings

For text spanning multiple lines, PIML allows the content to start on the next indented line, preserving newlines.

(description)
    This is a multi-line string example.
    It can hold extensive textual content.

Arrays (Lists)

Ordered collections of items are denoted by a > prefix on each indented item.

(fruits)
    > apple
    > banana
    > orange

Sets (Unique, Unordered Collections)

PIML introduces >| for sets, which are collections of unique, unordered items. Duplicate values are ignored by parsers.

(unique_ids)
    >| id_a
    >| id_b
    >| id_a # This duplicate will be ignored

Objects (Maps)

Unordered key-value pairs are defined through indentation, creating nested structures.

(user)
    (name) Alice
    (email) alice@example.com

List of Objects

PIML provides a clear way to represent lists of objects, combining the array > marker with nested object syntax. The key within the object (e.g., (contributor)) serves as metadata for readability and is ignored by parsers.

(contributors)
    > (contributor)
        (id) 1
        (name) Bob
    > (contributor)
        (id) 2
        (name) Carol

Specialized Types (by Convention)

PIML encourages conventions for specialized data:

  • Dates/Times: Typically represented as strings in ISO 8601 format (e.g., 2023-10-27T10:30:00Z).
  • Binary Data: Usually encoded as base64 strings.

PIML in Action: A Comprehensive Example

Let's look at a more complete example demonstrating various PIML features:

(document_metadata)
    (title) PIML Specification Document
    (version) 1.0.0
    (author) Fezcodex
    (creation_date) 2025-11-12T10:00:00Z
    (is_draft) true
    (tags)
        > data-format
        > serialization
        > piml
    (abstract)
        This document outlines the PIML format,
        its syntax, and its design philosophy.
        It aims for human-centric data representation.
    (contact)
        (email) contact@example.com
        (website) https://fezcode.github.io
    (empty_settings) nil

(configuration)
    (database)
        (type) SQLite
        (path) /data/app.db
        (max_connections) 50
    (api_keys)
        >| key_abc
        >| key_xyz
    (feature_toggles)
        (new_ui) true
        (beta_analytics) false

PIML vs. The World: A Comparison

PIML carves its niche by offering a distinct balance of features compared to other popular data formats.

PIML vs. JSON

  • PIML Advantages:
    • Readability: Significantly less visual noise due to the absence of quotes for keys and single-line strings, and no mandatory commas.
    • Comments: Native support for # comments, a feature lacking in JSON.
    • Multi-line Strings: More natural and cleaner syntax for multi-line text.
  • PIML Trade-off:
    • nil Ambiguity: The unified nil for null, empty arrays, and empty objects simplifies syntax but means PIML cannot perfectly distinguish between these empty collection types, potentially affecting round-trip conversions from JSON.

PIML vs. YAML

  • PIML Advantages:
    • Simplicity & Predictability: PIML's syntax is much smaller and more constrained, avoiding the "many ways to do one thing" complexity and the subtle whitespace pitfalls often associated with YAML. The explicit (key) syntax reduces ambiguity.
  • PIML Trade-off:
    • Feature Set: PIML deliberately omits advanced YAML features like anchors, aliases, and complex document structures, focusing on a simpler, more direct representation.

PIML vs. TOML

  • PIML Advantages:
    • Nesting Flexibility: PIML's indentation-based nesting is more adaptable for arbitrarily deep and complex data structures, contrasting with TOML's more rigid [table] and [[array of tables]] approach.
  • PIML Trade-off:
    • Configuration Focus: TOML excels as a configuration file format due to its flat, key-value pair nature. PIML is more general-purpose, though it can certainly be used for configuration.

Conclusion

PIML offers a refreshing perspective on data serialization, emphasizing human-centric design while maintaining machine parseability. Its explicit key syntax, indentation-driven structure, and thoughtful approach to data types make it a strong contender for scenarios where clarity, readability, and ease of writing are paramount. As data continues to grow in complexity, formats like PIML provide valuable alternatives for developers seeking more intuitive ways to manage and exchange information.

Read more...

]]>
<![CDATA[Image Toolkit Deep Dive]]> https://fezcode.com/blog/image-toolkit-deep-dive https://fezcode.com/blog/image-toolkit-deep-dive Mon, 10 Nov 2025 00:00:00 GMT Image Toolkit Deep Dive

In this blog post, we'll take a deep dive into the implementation of the Image Toolkit app. We'll explore the various image filters and their algorithms, and we'll also discuss a common React Hook-related warning and how to fix it.

You can try it here: apps::itk

The Filters

The Image Toolkit app provides a variety of filters that you can apply to your images. Let's take a look at each one and the algorithm behind it.

Monochrome

The monochrome filter converts an image to grayscale. The algorithm for this is quite simple. For each pixel in the image, we calculate the average of the red, green, and blue values. Then, we set the red, green, and blue values of the pixel to this average value.

const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
data[i] = avg; // red
data[i + 1] = avg; // green
data[i + 2] = avg; // blue

Blur

The blur filter applies a blur effect to the image. We use the stackblur-canvas library to achieve this effect. The canvasRGBA function from this library takes the canvas, the coordinates of the area to blur, and the blur radius as input.

canvasRGBA(canvas, 0, 0, canvas.width, canvas.height, blurAmount);

Dithering

Dithering is a technique used to create the illusion of more colors than are actually available. We use the Bayer dithering algorithm. This algorithm uses a threshold map (the Bayer matrix) to determine whether a pixel should be black or white.

const matrixSize = 4; // 4x4 Bayer threshold map
const bayerMatrix = [
  [1, 9, 3, 11],
  [13, 5, 15, 7],
  [4, 12, 2, 10],
  [16, 8, 14, 6]
];
// Scale the matrix entries (1-16) into thresholds spanning roughly 0-255.
const threshold = bayerMatrix[y % matrixSize][x % matrixSize] * 16;
const newValue = gray < threshold ? 0 : 255;

Cel Shading

Cel shading is a non-photorealistic rendering technique that makes 3D computer graphics appear to be flat. To achieve this effect, we first apply color quantization to reduce the number of colors in the image. Then, we use the Sobel operator to detect the edges in the image. Finally, we combine the quantized image and the edges to create the cel-shaded effect.

Halftone

The halftone filter simulates the effect of printing an image with a series of dots. We first convert the image to grayscale. Then, for each grid of pixels, we calculate the average brightness and draw a circle with a radius proportional to the brightness.
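The brightness-to-radius step can be sketched as a small helper; the function name and the linear mapping are assumptions, not the app's exact code:

```javascript
// Sketch: dot radius for one halftone cell from its average brightness
// (0 = black, 255 = white).
function dotRadius(avgBrightness, cellSize) {
  // Darker cells get larger dots; a pure-white cell gets no dot at all.
  return ((255 - avgBrightness) / 255) * (cellSize / 2);
}

console.log(dotRadius(0, 8));   // 4 — full-size dot for a black cell
console.log(dotRadius(255, 8)); // 0 — no dot for a white cell
```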

Solarization

Solarization is an effect where the image is partially reversed. We set a threshold and for each pixel, if the color component is less than the threshold, we invert it.

if (r < threshold) data[i] = 255 - r;
if (g < threshold) data[i + 1] = 255 - g;
if (b < threshold) data[i + 2] = 255 - b;

Posterization

Posterization is a process in which the number of colors in an image is reduced. For each color component of a pixel, we round it to the nearest value in a smaller set of values.
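A per-channel sketch of that rounding step (the function name and signature are assumptions):

```javascript
// Sketch: quantize one 0-255 channel value down to `levels` evenly
// spaced values.
function posterize(value, levels) {
  const step = 255 / (levels - 1);
  return Math.round(Math.round(value / step) * step);
}

console.log(posterize(128, 4)); // 170 — snapped to the nearest of 0, 85, 170, 255
```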

Sepia

The sepia filter gives the image a warm, brownish tone. We use a set of coefficients to calculate the new red, green, and blue values for each pixel.

data[i] = Math.min(255, (r * 0.393) + (g * 0.769) + (b * 0.189));
data[i + 1] = Math.min(255, (r * 0.349) + (g * 0.686) + (b * 0.168));
data[i + 2] = Math.min(255, (r * 0.272) + (g * 0.534) + (b * 0.131));

Pixelization

The pixelization filter creates a blocky, pixelated effect. We divide the image into a grid of blocks and fill each block with the color of the top-left pixel in that block.
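The block-fill step can be sketched on a raw RGBA buffer (the `ImageData.data`-style layout); `blockSize` and the in-place mutation are assumptions about the app's code:

```javascript
// Sketch: fill each blockSize x blockSize block with its top-left pixel's color.
function pixelate(data, width, height, blockSize) {
  for (let y = 0; y < height; y += blockSize) {
    for (let x = 0; x < width; x += blockSize) {
      const src = (y * width + x) * 4; // top-left pixel of this block
      for (let dy = 0; dy < blockSize && y + dy < height; dy++) {
        for (let dx = 0; dx < blockSize && x + dx < width; dx++) {
          const dst = ((y + dy) * width + (x + dx)) * 4;
          data[dst] = data[src];         // red
          data[dst + 1] = data[src + 1]; // green
          data[dst + 2] = data[src + 2]; // blue
          // alpha (data[dst + 3]) is left unchanged
        }
      }
    }
  }
}
```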

Duotone

The duotone filter uses two colors to create a two-toned image. We first convert the image to grayscale. Then, we interpolate between a dark color and a light color based on the brightness of each pixel.
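The interpolation can be sketched per pixel like this; the color pairs below are illustrative, not the app's actual palette:

```javascript
// Sketch: map a grayscale brightness (0-255) between a dark and a light RGB color.
function duotone(gray, dark, light) {
  const t = gray / 255; // brightness as 0..1
  return [
    Math.round(dark[0] + (light[0] - dark[0]) * t),
    Math.round(dark[1] + (light[1] - dark[1]) * t),
    Math.round(dark[2] + (light[2] - dark[2]) * t),
  ];
}

console.log(duotone(0, [20, 0, 60], [255, 240, 200]));   // darkest pixel -> dark color
console.log(duotone(255, [20, 0, 60], [255, 240, 200])); // brightest pixel -> light color
```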

ASCII Art

The ASCII art filter converts the image to ASCII characters. We first convert the image to grayscale. Then, for each pixel, we map its brightness to a character from a character ramp.

const ascii = asciiArt(imageData, '@%#*+=-:. ');

How It Works: A Deeper Look

Here's a breakdown of how the Image Toolkit is built, covering the React structure, canvas manipulation, and the filter algorithms.

1. React Component Structure

The ImageToolkitPage is a functional React component that uses hooks to manage its state and behavior:

  • useState: This hook manages the application's state:
    • image: Stores the uploaded image as a data URL.
    • activeEffect: Tracks the currently selected filter.
    • blurAmount: Holds the value for the blur filter's intensity.
    • asciiArtOutput: Stores the generated ASCII art string.
  • useRef: This provides direct references to the <canvas> and the original <img> elements in the DOM.
  • useEffect: This is the core of the image processing. It runs whenever the image or activeEffect state changes, triggering the drawing and filtering logic.
  • useToast: This is a custom hook used to display toast notifications for actions like copying text.

2. Image Upload

When you click "Select Image," the handleImageUpload function is triggered. It uses the browser's FileReader API to read the selected image file and convert it into a data URL. This data URL is then stored in the image state, which causes the component to re-render and display the uploaded image.

3. Canvas and useEffect

The useEffect hook orchestrates the image manipulation:

  1. It waits for an image to be present and for the canvasRef to be attached to the canvas element.
  2. It gets the 2D rendering context of the canvas (ctx).
  3. An Image object is created, and its src is set to the data URL of the uploaded image.
  4. In the image's onload event, the original image is drawn onto the canvas using ctx.drawImage().
  5. A series of if/else if statements checks the activeEffect state. Based on which filter is active, it calls the corresponding function to manipulate the image on the canvas.

4. Filter Implementation: Pixel-by-Pixel Manipulation

The magic of the filters happens by directly manipulating the pixel data of the canvas.

  1. getImageData(): To get the pixel data, we call ctx.getImageData(). This returns an ImageData object.
  2. ImageData.data: This object contains a data property, which is a Uint8ClampedArray. This array is a flat list of RGBA (Red, Green, Blue, Alpha) values for every pixel in the image. For example, the first four values in the array (data[0] to data[3]) represent the RGBA of the very first pixel.
  3. Manipulation: Each filter's algorithm iterates through this data array and modifies the R, G, and B values according to its logic.
  4. putImageData(): After the data array has been modified, ctx.putImageData() is called to draw the new pixel data back onto the canvas, displaying the filtered image.
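
These four steps can be sketched as a generic helper. In the browser the ImageData object would come from ctx.getImageData() and go back via ctx.putImageData(); here a plain object stands in for it, and an invert callback stands in for any filter:

```javascript
function applyFilter(imageData, perPixel) {
  const data = imageData.data; // flat [r, g, b, a, r, g, b, a, ...] array
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b] = perPixel(data[i], data[i + 1], data[i + 2]);
    data[i] = r;
    data[i + 1] = g;
    data[i + 2] = b; // alpha (data[i + 3]) is left untouched
  }
  return imageData;
}

// Example filter: invert each channel.
const invert = (r, g, b) => [255 - r, 255 - g, 255 - b];
```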

For the Blur filter specifically, the stackblur-canvas library is used. It provides a highly optimized and performant blur algorithm that is much faster than a manual implementation.

5. UI and Event Handling

The user interface is built with standard React components and styled using Tailwind CSS for a clean and modern look.

  • Filter Buttons: Each filter button has an onClick event handler (e.g., handleConvertToMonochrome). When clicked, this handler updates the activeEffect state with the name of the filter. This state change triggers the useEffect hook, which then applies the selected filter's logic to the canvas.
  • Download Button: The "Download Image" button creates a temporary <a> (link) element. Its href is set to the canvas's current content as a data URL (canvas.toDataURL()), and the download attribute is set. The link is then programmatically "clicked" to initiate the download.
  • Copy Button: The "Copy" button for the ASCII art uses the modern navigator.clipboard.writeText() API to easily copy the generated ASCII string to the user's clipboard.

The useCallback and useEffect Dependency Array Error

You might have encountered this warning while developing the Image Toolkit app:

The 'toGrayscale' function makes the dependencies of useEffect Hook (at line 348) change on every render. To fix this, wrap the definition of 'toGrayscale' in its own useCallback() Hook

This warning occurs because the toGrayscale function is defined inside the ImageToolkitPage component, which means a new toGrayscale function is created on every render. Since toGrayscale is a dependency of the useEffect hook, the hook runs again after every render, which can cause an infinite loop.

To fix this, we can wrap the definition of toGrayscale in its own useCallback hook. The useCallback hook will memoize the function, so that it is not recreated on every render.

const toGrayscale = useCallback((imageData) => {
  // ...
}, []);

By wrapping all the image processing functions in useCallback, we can prevent the useEffect hook from running on every render and fix the infinite loop.

Read more...

]]>
<![CDATA[Picker Wheel Deep Dive]]> https://fezcode.com/blog/picker-wheel-deep-dive https://fezcode.com/blog/picker-wheel-deep-dive Sun, 09 Nov 2025 00:00:00 GMT A Deep Dive into the Picker Wheel: Canvas, React, and CSS

In this post, we'll take a deep dive into the implementation of the Picker Wheel app, a fun and interactive way to pick a random winner from a list of entries. We'll explore how it's built using React, the Canvas API, and Tailwind CSS, and we'll cover the key concepts and techniques used in its development.

You can play with it here: apps::pw

The Canvas Wheel

The heart of the Picker Wheel is the wheel itself, which is drawn using the HTML5 Canvas API. The canvas provides a powerful and flexible way to draw graphics and animations, and it's perfect for creating the dynamic and interactive wheel we need.

The drawWheel function is responsible for drawing the wheel. It takes the list of entries and the current rotation angle as input, and it uses them to draw the segments of the wheel. Each segment is a pie slice, and it's filled with a unique color from a predefined color palette. The text of the entry is then drawn on top of the segment, rotated to align with the segment's angle.

// This function is responsible for drawing the wheel on the canvas.
const drawWheel = () => {
  // Get a reference to the canvas element.
  const canvas = canvasRef.current;
  // If the canvas element doesn't exist, do nothing.
  if (!canvas) return;
  // Get the 2D rendering context for the canvas.
  const ctx = canvas.getContext('2d');
  // Get the width and height of the canvas.
  const { width, height } = canvas;
  // Calculate the size of each arc (segment) of the wheel.
  const arc = 2 * Math.PI / (entries.length || 1);

  // Clear the canvas before drawing.
  ctx.clearRect(0, 0, width, height);
  // Save the current state of the canvas context.
  ctx.save();
  // Move the origin of the canvas to the center.
  ctx.translate(width / 2, height / 2);
  // Rotate the canvas by the current rotation angle.
  ctx.rotate(rotation);
  // Move the origin back to the top-left corner.
  ctx.translate(-width / 2, -height / 2);

  // Loop through each entry and draw a segment for it.
  for (let i = 0; i < entries.length; i++) {
    // Calculate the angle of the current segment.
    const angle = i * arc;
    // Set the fill style to a color from the color palette.
    ctx.fillStyle = colorPalette[i % colorPalette.length];
    // Begin a new path.
    ctx.beginPath();
    // Draw the outer arc of the segment.
    ctx.arc(width / 2, height / 2, width / 2 - 10, angle, angle + arc);
    // Draw the inner arc of the segment.
    ctx.arc(width / 2, height / 2, 0, angle + arc, angle, true);
    // Fill the segment with the current fill style.
    ctx.fill();

    // Save the current state of the canvas context.
    ctx.save();
    // Set the fill style for the text.
    ctx.fillStyle = '#000';
    // Set the font for the text.
    ctx.font = '30px Arial';
    // Move the origin to the center of the segment.
    ctx.translate(width / 2 + Math.cos(angle + arc / 2) * (width / 2 - 80), height / 2 + Math.sin(angle + arc / 2) * (height / 2 - 80));
    // Rotate the canvas to align the text with the segment.
    ctx.rotate(angle + arc / 2 + Math.PI / 2);
    // Get the text for the current entry.
    const text = entries[i];
    // Draw the text on the canvas.
    ctx.fillText(text, -ctx.measureText(text).width / 2, 0);
    // Restore the canvas context to its previous state.
    ctx.restore();
  }
  // Restore the canvas context to its previous state.
  ctx.restore();
};

React Hooks

The Picker Wheel app is built using React, and it makes extensive use of React Hooks to manage its state and side effects.

  • useState: The useState hook is used to manage the component's state, including the list of entries, the new entry input, the winner, and the spinning state.
  • useRef: The useRef hook is used to get a reference to the canvas element, which is needed for drawing the wheel. It's also used to store the ID of the animation frame, which is used to cancel the animation when the component unmounts.
  • useEffect: The useEffect hook is used to draw the wheel whenever the list of entries or the rotation angle changes.

Timers and Animation

The spinning animation is created using a combination of requestAnimationFrame and an easing function. The requestAnimationFrame function provides a smooth and efficient way to create animations, and the easing function is used to create the "fast spin then slow down" effect.

The spin function is responsible for starting the animation. It calculates the start and end rotation angles, and then it uses requestAnimationFrame to update the rotation angle on each frame of the animation. The easing function is used to calculate the new rotation angle based on the progress of the animation.

// This function creates a "fast spin then slow down" effect.
const easeOut = (t) => 1 - Math.pow(1 - t, 3);

// This function starts the spinning animation.
const spin = () => {
  // Only spin if there are more than one entry and the wheel is not already spinning.
  if (entries.length > 1 && !spinning) {
    // Set the spinning state to true.
    setSpinning(true);
    // Clear the winner.
    setWinner(null);
    // Set the duration of the animation.
    const duration = 7000;
    // Get the start time of the animation.
    const startTime = performance.now();
    // Get the start rotation of the wheel.
    const startRotation = rotation;
    // Calculate a random number of spins.
    const randomSpins = Math.random() * 5 + 5;
    // Calculate the end rotation of the wheel.
    const endRotation = startRotation + randomSpins * 2 * Math.PI;

    // This function is called on each frame of the animation.
    const animate = (currentTime) => {
      // Calculate the elapsed time.
      const elapsedTime = currentTime - startTime;
      // Calculate the progress of the animation.
      const progress = Math.min(elapsedTime / duration, 1);
      // Calculate the eased progress of the animation.
      const easedProgress = easeOut(progress);

      // Calculate the new rotation of the wheel.
      const newRotation = startRotation + (endRotation - startRotation) * easedProgress;
      // Set the new rotation of the wheel.
      setRotation(newRotation);

      // If the animation is not finished, request another frame.
      if (progress < 1) {
        animationFrameId.current = requestAnimationFrame(animate);
      } else {
        // ...
      }
    };

    // Start the animation.
    animationFrameId.current = requestAnimationFrame(animate);
  }
};

CSS and Styling

The Picker Wheel app is styled using a combination of CSS and Tailwind CSS. The CSS is used to style the wheel and the pin, and Tailwind CSS is used for the layout and the rest of the styling.

The wheel is styled with a border and a box shadow to give it a 3D look. The pin is a simple triangle created with CSS borders.

/* Style for the wheel */
.wheel {
  border-radius: 50%;
  border: 5px solid #333;
  box-shadow: 0 0 20px rgba(0,0,0,0.5);
}

/* Style for the pin */
.pin {
  position: absolute;
  top: -20px;
  left: 50%;
  transform: translateX(-50%);
  width: 0;
  height: 0;
  border-left: 15px solid transparent;
  border-right: 15px solid transparent;
  border-top: 30px solid #333;
  z-index: 10;
}

Event Handling

The Picker Wheel app uses a variety of event handlers to respond to user input.

  • onClick: The onClick event handler is used to handle clicks on the "Spin", "Add", "Delete", and "Load from List" buttons.
  • onChange: The onChange event handler is used to handle changes to the new entry input field.
  • onKeyDown: The onKeyDown event handler is used to handle key presses on the new entry input field. When the "Enter" key is pressed, the addEntry function is called.

Reading from a List

The "Load from List" feature allows users to paste a list of entries, which are then added to the picker wheel. The list is parsed using the newline character as a delimiter, and the entries are added to the list of entries. The number of entries is limited to 30.

// This function is called when the user saves a list of entries from the modal.
const handleSaveList = (list) => {
  // Split the list into an array of entries, using the newline character as a delimiter.
  const newEntries = list.split('\n').map(entry => entry.trim()).filter(entry => entry);
  // Add the new entries to the existing list of entries, and limit the total number of entries to 30.
  setEntries([...entries, ...newEntries].slice(0, 30));
};

Tailwind CSS

The Picker Wheel app uses Tailwind CSS for its layout and styling. Tailwind CSS is a utility-first CSS framework that provides a set of low-level utility classes that can be used to build custom designs.

The app uses a flexbox layout to arrange the wheel and the entry list side-by-side. The flex and gap-8 classes are used to create the flex container and the gap between the two elements.

<!-- This is the main container for the wheel and the entry list. -->
<div class="flex gap-8">
  <!-- This is the container for the wheel. -->
  <div class="flex flex-col items-center">
    {/* ... */}
  </div>
  <!-- This is the container for the entry list. -->
  <div class="w-full max-w-xs ml-16">
    {/* ... */}
  </div>
</div>

Conclusion

The Picker Wheel app is a fun and interactive way to pick a random winner from a list of entries. It's built using a combination of React, the Canvas API, and Tailwind CSS, and it demonstrates a variety of concepts and techniques used in modern web development.

Read more...

]]>
<![CDATA[Find Minimum in Rotated Sorted Array]]> https://fezcode.com/blog/find-minimum-in-rotated-sorted-array https://fezcode.com/blog/find-minimum-in-rotated-sorted-array Sat, 08 Nov 2025 00:00:00 GMT Problem Description

Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]).

Find the minimum element.

You may assume no duplicate exists in the array.

Solution

This problem can be solved efficiently using a modified binary search approach. The key idea is to observe that if we look at the middle element mid, one of the two halves (left or right) must be sorted. The minimum element will always be in the unsorted half.

  1. Initialize left to 0 and right to len(nums) - 1.
  2. While left < right:
    a. Calculate mid = left + (right - left) / 2.
    b. If nums[mid] > nums[right], the minimum element is in the right half (from mid + 1 to right), because the right part is unsorted. So, set left = mid + 1.
    c. Else (nums[mid] < nums[right]), the minimum element is in the left half (from left to mid), because the right part is sorted and nums[mid] itself could be the minimum. So, set right = mid.
  3. Return nums[left] (or nums[right], as left and right will converge to the minimum element's index).

Code (GoLang)

package main

import "fmt"

func findMin(nums []int) int {
    left, right := 0, len(nums)-1

    for left < right {
        mid := left + (right-left)/2
        if nums[mid] > nums[right] {
            // Minimum is in the right half (mid+1 to right)
            left = mid + 1
        } else {
            // Minimum is in the left half (left to mid)
            // nums[mid] could be the minimum
            right = mid
        }
    }

    return nums[left]
}

func main() {
    fmt.Println(findMin([]int{3, 4, 5, 1, 2})) // Output: 1
    fmt.Println(findMin([]int{4, 5, 6, 7, 0, 1, 2})) // Output: 0
    fmt.Println(findMin([]int{1})) // Output: 1
    fmt.Println(findMin([]int{1, 2})) // Output: 1
    fmt.Println(findMin([]int{2, 1})) // Output: 1
}

Read more...

]]>
<![CDATA[Demystifying Tailwind CSS]]> https://fezcode.com/blog/demystifying-tailwind-css https://fezcode.com/blog/demystifying-tailwind-css Fri, 07 Nov 2025 00:00:00 GMT Demystifying Tailwind CSS in fezcodex: A Utility-First Approach

In the fezcodex project, you'll notice that our components are styled cool(!). Instead of writing custom CSS for every element, Tailwind CSS is used. This post will explain what Tailwind CSS is, how it's configured in my project, and why it's a powerful tool for building user interfaces.

Part 1: The Core Concept - Utility-First CSS

Traditionally, when styling a webpage, you might write CSS like this:

.my-button {
  background-color: blue;
  color: white;
  padding: 1rem;
  border-radius: 0.5rem;
}

And then apply it in your HTML:

<button class="my-button">Click Me</button>

Tailwind CSS takes a utility-first approach. Instead of writing custom CSS classes for every component, you apply small, single-purpose utility classes directly in your HTML (or JSX, in our case). Each class does one thing, and one thing only.

For example, the same button in Tailwind would look like this:

<button class="bg-blue-500 text-white p-4 rounded-md">Click Me</button>

Benefits of Utility-First CSS:

  • Rapid UI Development: You can build complex UIs much faster because you're not constantly switching between HTML/JSX and CSS files. All the styling happens directly in your markup.
  • Consistent Design: By using a predefined set of utility classes (which are based on a design system), it's much easier to maintain a consistent look and feel across your application.
  • No More Unused CSS: Tailwind, especially with its JIT (Just-In-Time) mode, only generates the CSS that you actually use in your project. This results in incredibly small and optimized CSS bundles, improving performance.
  • Avoid Naming Headaches: You no longer have to come up with semantic class names for every single element, which can be a surprisingly difficult task in larger projects.

Part 2: Tailwind in fezcodex - Configuration and Customization

Our project customizes Tailwind to fit its specific design needs. This is primarily managed through two files:

tailwind.config.js

This is the central configuration file for Tailwind CSS. It tells Tailwind how to behave and what custom styles to include.

  • content: This array (./src/**/*.{js,jsx,ts,tsx}) tells Tailwind which files to scan for utility classes. This is crucial for the build process to identify and include only the necessary CSS.
  • theme.extend.colors: This is where we integrate our custom color palette. You'll see it imports colors from ./src/config/colors.js. This means that any color defined in colors.js (like article: '#FA8072') becomes available as a Tailwind utility class. For example:
    • text-article will apply the article color to text.
    • bg-article will apply the article color to the background.
    • border-article will apply the article color to the border. This is why we use text-article and not just article – the text- prefix tells Tailwind what CSS property to apply the color to.
  • theme.extend.fontFamily: Similar to colors, this section allows us to define and use custom fonts (imported from ./src/config/fonts.js) throughout the project using Tailwind's font-{name} classes.
  • plugins: We use @tailwindcss/typography here. This plugin provides a set of prose classes that can be used to style raw HTML (like the content generated from Markdown files) with beautiful, readable typography, without having to manually style every heading, paragraph, and list item.

src/config/colors.js and src/config/fonts.js

These files act as our project's design token repositories. They centralize all our custom colors and font definitions, making it easy to manage and update our design system from a single source.

Part 3: How It All Comes Together - Building UI

When you look at a component like AppCard.js or WordCounterPage.js, you'll see a lot of classes directly in the JSX. For example, a card might have classes like:

<div class="bg-transparent border rounded-lg shadow-lg p-6 flex flex-col justify-between relative transform transition-all duration-300 ease-in-out hover:scale-105 hover:shadow-2xl overflow-hidden h-full">
  <!-- ... content ... -->
</div>

Let's break down a few of these:

  • bg-transparent: Sets the background to transparent.
  • border: Adds a default border.
  • rounded-lg: Applies a large border-radius.
  • shadow-lg: Adds a large box shadow.
  • p-6: Adds padding of 1.5rem on all sides.
  • flex flex-col justify-between: Configures the element as a flex container, arranging its children in a column and distributing space between them.
  • hover:scale-105 hover:shadow-2xl: These are variant classes. They apply scale-105 (makes the element 5% larger) and shadow-2xl (a larger shadow) only when the element is hovered over.
  • transition-all duration-300 ease-in-out: Ensures that changes to properties like transform (for scale) and box-shadow (for shadow) happen smoothly over 300 milliseconds.

Tailwind also makes responsive design easy with prefixes like sm:, md:, lg:, and xl:. For example, md:flex would make an element a flex container only on medium screens and larger.

Part 4: The Build Process

During development and when building for production, a tool like Craco (which sits on top of Create React App's Webpack configuration) processes your code. It uses PostCSS and the Tailwind plugin to scan all your files for Tailwind utility classes. It then generates a minimal CSS file containing only the styles corresponding to the classes you've actually used. This ensures that your final application bundle is as small and performant as possible.

Part 5: Let's Change the Color of Horizontal Rule

You can change the color of an <hr> (horizontal rule) element in Tailwind CSS using a couple of common methods:

1. Using border-color utilities (Most Common)

By default, an <hr> element is rendered as a line using its border-top or border-bottom property. You can directly apply Tailwind's border color utilities to change its color.

<!-- A simple gray HR -->
<hr class="border-gray-300" />

<!-- A red HR -->
<hr class="border-red-500" />

<!-- A thicker, blue HR -->
<hr class="border-t-4 border-blue-500" />

<!-- An HR with a custom color from your config (e.g., primary.400) -->
<hr class="border-primary-400" />

Explanation:

  • border-gray-300: Sets the border color to a light gray.
  • border-red-500: Sets the border color to red.
  • border-t-4: Makes the top border 4 pixels thick. You can use border-t, border-b, border-l, border-r or just border for all sides.
  • border-blue-500: Sets the border color to blue.
  • border-primary-400: Uses a custom color defined in your tailwind.config.js (like the primary color you have).

2. Using background-color utilities with a defined height

Another way is to treat the <hr> as a block element with a specific height and then apply a background color. This gives you more control over its appearance, especially if you want a solid block of color rather than just a border line.

<!-- A red HR with a height of 1px -->
<hr class="h-px bg-red-500 border-0" />

<!-- A thicker, blue HR -->
<hr class="h-2 bg-blue-500 border-0" />

<!-- An HR with a custom color from your config -->
<hr class="h-1 bg-primary-400 border-0" />

Explanation:

  • h-px: Sets the height to 1 pixel. You can use any height utility (e.g., h-1, h-2, h-4, etc.).
  • bg-red-500: Sets the background color to red.
  • border-0: It's important to remove the default border of the <hr> when using this method, otherwise, you might see both the border and the background color.

Choose the method that best suits your design needs. The border-color method is generally more semantic for an <hr>, but the background-color method offers more flexibility for solid bars.

Part 6: Let's Put Two <span> Elements Next to Each Other

When your parent div has the classes flex flex-col text-sm group w-max flex-grow, the flex-col class specifically instructs the flex container to arrange its direct children in a column, meaning they will stack vertically, one below the other.

To put two <span> elements next to each other (horizontally) within this flex-col parent, you need to introduce a nested flex container that is configured for a row layout.

Here's how you can do it:

<div class="flex flex-col text-sm group w-max flex-grow">
  <!-- Other content that might stack vertically due to parent's flex-col -->
  <div>Other content 1</div>
  <div>Other content 2</div>

  <!-- This is the new nested flex container for your two spans -->
  <div class="flex items-center space-x-2">
    <span>Span 1</span>
    <span>Span 2</span>
  </div>

  <!-- More content that might stack vertically -->
  <div>More content 3</div>
</div>

Explanation:

  1. flex flex-col text-sm group w-max flex-grow: This is your existing parent div. It will arrange its direct children (like "Other content 1", "Other content 2", the new nested div, and "More content 3") in a vertical column.
  2. <div class="flex items-center space-x-2">: This is the crucial part.
    • flex: This makes this new div a flex container. By default, a flex container arranges its items in a row (flex-row).
    • items-center: This vertically aligns the items (your <span>s) in the center of this nested flex container.
    • space-x-2: This Tailwind utility adds horizontal space between the direct children of this flex container (your two <span>s). You can adjust 2 to any spacing you need (e.g., space-x-1, space-x-4).

By using this nested div, you create a horizontal layout context specifically for your two <span> elements, while the overall parent div still maintains its flex-col behavior for its other children.

Conclusion

Tailwind CSS provides a powerful and efficient way to build and maintain the UI of the fezcodex project. By embracing its utility-first philosophy and leveraging its extensive configuration options, we can rapidly develop consistent, responsive, and performant user interfaces. It streamlines the styling process, allowing developers to focus more on functionality and less on managing complex CSS stylesheets.

Read more...

]]>
<![CDATA[Lowest Common Ancestor with Binary Search Tree]]> https://fezcode.com/blog/lca https://fezcode.com/blog/lca Fri, 07 Nov 2025 00:00:00 GMT Understanding Trees, Binary Search Trees, and Finding the Lowest Common Ancestor

In the world of computer science, data structures are the building blocks of efficient algorithms. One of the most fundamental and versatile data structures is the Tree.

This post will take you on a journey from the basics of trees to a specific type called a Binary Search Tree (BST), explore common algorithms used with them, and finally, solve a classic problem: finding the Lowest Common Ancestor of two nodes in a BST.

What is a Tree?

In computer science, a Tree is a hierarchical data structure that consists of nodes connected by edges.

Unlike linear data structures like arrays or linked lists, trees are non-linear and are used to represent hierarchical relationships.

Key Terminology:

  • Node: The fundamental part of a tree that stores data.
  • Edge: The connection between two nodes.
  • Root: The topmost node in a tree. It's the only node that doesn't have a parent.
  • Parent: A node that has a child node.
  • Child: A node that has a parent node.
  • Leaf: A node that does not have any children.
  • Subtree: A tree consisting of a node and its descendants.
  • Depth: The length of the path from the root to a specific node.
  • Height: The length of the longest path from a specific node to a leaf.

Trees are used in various applications, such as file systems, organization charts, and even in parsing expressions in compilers.

Binary Search Trees (BSTs)

A Binary Search Tree (BST) is a special type of binary tree where the nodes are ordered in a specific way.

This ordering makes operations like searching, insertion, and deletion very efficient.

A binary tree is a BST if it satisfies the following properties:

  1. The left subtree of a node contains only nodes with keys lesser than the node's key.
  2. The right subtree of a node contains only nodes with keys greater than the node's key.
  3. Both the left and right subtrees must also be binary search trees.

This structure ensures that for any given node, all the values in its left subtree are smaller, and all the values in its right subtree are larger.

Common Tree Algorithms

Trees have a variety of algorithms for traversal and manipulation. The most common are traversal algorithms, which visit each node in the tree exactly once.

Tree Traversal

There are two main approaches to traversing a tree:

  1. Depth-First Search (DFS): This approach explores as far as possible down one branch before backtracking. There are three common ways to perform DFS:

    • In-order Traversal: Visit the left subtree, then the root, then the right subtree. For a BST, this traversal visits the nodes in ascending order.
    • Pre-order Traversal: Visit the root, then the left subtree, then the right subtree. This is useful for creating a copy of the tree.
    • Post-order Traversal: Visit the left subtree, then the right subtree, then the root. This is useful for deleting nodes from the tree.
  2. Breadth-First Search (BFS): This approach explores the tree level by level. It visits all the nodes at a given depth before moving on to the next level. BFS is typically implemented using a queue.

More Tree Algorithms in Go

Let's explore how to implement some of these fundamental tree algorithms in Go.

Finding the Height/Depth of a Binary Tree

The height of a binary tree is the number of edges on the longest path from the root node to a leaf node. A tree with only a root node has a height of 0.

The concept is closely related to the depth of a node, which is its distance from the root. The height of the tree is, therefore, the maximum depth of any node in the tree.

We can calculate the height recursively. The height of a node is 1 plus the maximum height of its left or right subtree.


// TreeNode definition from before
type TreeNode struct {
    Val int
    Left *TreeNode
    Right *TreeNode
}

func max(a, b int) int {
    if a > b {
        return a
    }
    return b
}

func height(node *TreeNode) int {
    if node == nil {
        return -1 // Height of a null tree is -1
    }
    leftHeight := height(node.Left)
    rightHeight := height(node.Right)
    return 1 + max(leftHeight, rightHeight)
}

DFS Traversals in Go

Here are the Go implementations for the three DFS traversal methods.

In-order Traversal

func inOrderTraversal(node *TreeNode) {
    if node == nil {
        return
    }
    inOrderTraversal(node.Left)
    fmt.Println(node.Val) // Process the node
    inOrderTraversal(node.Right)
}

Pre-order Traversal

func preOrderTraversal(node *TreeNode) {
    if node == nil {
        return
    }
    fmt.Println(node.Val) // Process the node
    preOrderTraversal(node.Left)
    preOrderTraversal(node.Right)
}

Post-order Traversal

func postOrderTraversal(node *TreeNode) {
    if node == nil {
        return
    }
    postOrderTraversal(node.Left)
    postOrderTraversal(node.Right)
    fmt.Println(node.Val) // Process the node
}

LeetCode 235: Lowest Common Ancestor of a Binary Search Tree

Now, let's apply our knowledge to a classic problem.

The Lowest Common Ancestor (LCA) of two nodes, p and q, in a tree is the lowest (i.e., deepest) node that has both p and q as descendants.

The Problem

Given a binary search tree (BST), find the lowest common ancestor (LCA) of two given nodes in the BST.

For example, consider the following BST:

      6
     / \
    2   8
   / \ / \
  0  4 7  9
    / \
   3   5
  • The LCA of nodes 2 and 8 is 6.
  • The LCA of nodes 2 and 4 is 2, since a node can be a descendant of itself.
  • The LCA of nodes 3 and 5 is 4.

The Solution

The properties of a BST make finding the LCA particularly efficient.

We can start at the root of the tree and use the values of p and q to decide where to go next.

Let's consider the current node we are at, let's call it current.

  1. If both p and q are greater than current.val, it means that the LCA must be in the right subtree. So, we move to the right child.
  2. If both p and q are less than current.val, it means that the LCA must be in the left subtree. So, we move to the left child.
  3. If one of p or q is greater than current.val and the other is less than current.val (or if one of them is equal to current.val), then current is the LCA.

This is because p and q are on opposite sides of the current node, meaning it's the split point and thus the lowest common ancestor.

We can implement this logic both iteratively and recursively.

Iterative Solution

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func lowestCommonAncestor(root, p, q *TreeNode) *TreeNode {
    current := root
    for current != nil {
        if p.Val > current.Val && q.Val > current.Val {
            current = current.Right
        } else if p.Val < current.Val && q.Val < current.Val {
            current = current.Left
        } else {
            return current
        }
    }
    return nil // Should not be reached in a valid BST
}

Recursive Solution

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func lowestCommonAncestor(root, p, q *TreeNode) *TreeNode {
    if root == nil {
        return nil
    }

    if p.Val > root.Val && q.Val > root.Val {
        return lowestCommonAncestor(root.Right, p, q)
    } else if p.Val < root.Val && q.Val < root.Val {
        return lowestCommonAncestor(root.Left, p, q)
    } else {
        return root
    }
}

Both of these solutions are very efficient, with a time complexity of O(H), where H is the height of the tree. In a balanced BST, this is O(log N), where N is the number of nodes.

The space complexity for the iterative solution is O(1), while the recursive solution has a space complexity of O(H) due to the call stack.

Alternative: Stack-Based Solution

Another way to solve this problem is by finding the path from the root to each of the two nodes, p and q. Once we have both paths, we can compare them to find the last common node, which is the LCA.

This method is more generic and would also work for a regular binary tree, but it's less efficient for a BST than the previous solutions because it requires traversing parts of the tree multiple times and uses extra space to store the paths.

Here is the implementation in Go:

// Helper function to find the path from the root to a target node
func getPath(root, target *TreeNode) []*TreeNode {
    path := []*TreeNode{}
    current := root
    for current != nil {
        path = append(path, current)
        if target.Val < current.Val {
            current = current.Left
        } else if target.Val > current.Val {
            current = current.Right
        } else {
            break // Found the target
        }
    }
    return path
}

func lowestCommonAncestorWithStacks(root, p, q *TreeNode) *TreeNode {
    pathP := getPath(root, p)
    pathQ := getPath(root, q)

    var lca *TreeNode

    // Iterate through both paths until they diverge
    for i := 0; i < len(pathP) && i < len(pathQ); i++ {
        if pathP[i] == pathQ[i] {
            lca = pathP[i]
        } else {
            break
        }
    }

    return lca
}

Conclusion

Trees and Binary Search Trees are powerful data structures that are essential for any programmer's toolkit. By understanding their properties and the algorithms that operate on them, you can solve a wide range of problems efficiently.

The Lowest Common Ancestor problem is a perfect example of how the structure of a BST can be leveraged to find an elegant and optimal solution.

Read more...

]]>
<![CDATA[Monotonic Stack with Daily Temperatures]]> https://fezcode.com/blog/monotonic-stack https://fezcode.com/blog/monotonic-stack Wed, 05 Nov 2025 00:00:00 GMT Monotonic Stack & LeetCode's "Daily Temperatures"

Let's dive into a cool data structure called the Monotonic Stack and see how it helps us solve a popular coding problem, "Daily Temperatures".

What's a Monotonic Stack?

Imagine a stack of books, but with a rule: you can only place a new book on top if it's lighter than the book already there. This creates a stack of books sorted by weight, from heaviest at the bottom to lightest at the top.

That's a monotonic stack. It's a regular stack, but it enforces a specific order on its elements – either always increasing or always decreasing.

  • Monotonically Increasing Stack: Elements are always in increasing order from bottom to top. [1, 2, 5, 8]
  • Monotonically Decreasing Stack: Elements are always in decreasing order from bottom to top. [10, 7, 4, 1]

This simple rule makes them incredibly efficient for problems where you need to find the "next greater element" or "previous smaller element" for items in a sequence.


The Problem: Daily Temperatures (LeetCode 739)

The problem is this: You're given a list of daily temperatures. For each day, you need to figure out how many days you have to wait for a warmer temperature. If no such day exists, the answer is 0.

Example:

temperatures = [73, 74, 75, 71, 69, 72, 76, 73]

Expected Output:

result = [1, 1, 4, 2, 1, 1, 0, 0]

Let's break down why:

  • For day 0 (73°), the next day (day 1) is warmer (74°). So, wait is 1 day.
  • For day 1 (74°), the next day (day 2) is warmer (75°). So, wait is 1 day.
  • For day 2 (75°), you have to wait until day 6 (76°). So, wait is 6 - 2 = 4 days.
  • ...and so on.

The Solution: Using a Monotonically Decreasing Stack

We'll use a stack to store the indices of the days. We'll keep this stack monotonically decreasing, meaning the temperatures corresponding to the indices in the stack will always be going down.

Let's trace our example: temperatures = [73, 74, 75, 71, 69, 72, 76, 73]

Initialize result = [0, 0, 0, 0, 0, 0, 0, 0] and an empty stack.

Day 0: Temp = 73

  • Stack is empty. Push index 0.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
               ^ (Current: 73)
Stack:
| 0 |  (Temp: 73)
+---+

Day 1: Temp = 74

  • 74 is warmer than temperatures[0] (73).
  • We found a warmer day for index 0!
  • Pop 0 from the stack.
  • result[0] = 1 (current index) - 0 (popped index) = 1.
  • Now the stack is empty. Push index 1.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                   ^ (Current: 74)
Stack:
| 1 |  (Temp: 74)
+---+

Result: [1, 0, 0, 0, 0, 0, 0, 0]

Day 2: Temp = 75

  • 75 is warmer than temperatures[1] (74).
  • Pop 1. result[1] = 2 - 1 = 1.
  • Stack is empty. Push index 2.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                       ^ (Current: 75)
Stack:
| 2 |  (Temp: 75)
+---+

Result: [1, 1, 0, 0, 0, 0, 0, 0]

Day 3: Temp = 71

  • 71 is not warmer than temperatures[2] (75).
  • The stack needs to stay decreasing. Push index 3.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                           ^ (Current: 71)
Stack:
| 3 |  (Temp: 71)
| 2 |  (Temp: 75)
+---+

Day 4: Temp = 69

  • 69 is not warmer than temperatures[3] (71).
  • Push index 4.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                               ^ (Current: 69)
Stack:
| 4 |  (Temp: 69)
| 3 |  (Temp: 71)
| 2 |  (Temp: 75)
+---+

Day 5: Temp = 72

  • 72 is warmer than temperatures[4] (69).
  • Pop 4. result[4] = 5 - 4 = 1.
  • 72 is warmer than temperatures[3] (71).
  • Pop 3. result[3] = 5 - 3 = 2.
  • 72 is NOT warmer than temperatures[2] (75). Stop popping.
  • Push index 5.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                                   ^ (Current: 72)
Stack:
| 5 |  (Temp: 72)
| 2 |  (Temp: 75)
+---+

Result: [1, 1, 0, 2, 1, 0, 0, 0]

Day 6: Temp = 76

  • 76 is warmer than temperatures[5] (72).
  • Pop 5. result[5] = 6 - 5 = 1.
  • 76 is warmer than temperatures[2] (75).
  • Pop 2. result[2] = 6 - 2 = 4.
  • Stack is empty. Push index 6.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                                       ^ (Current: 76)
Stack:
| 6 |  (Temp: 76)
+---+

Result: [1, 1, 4, 2, 1, 1, 0, 0]

Day 7: Temp = 73

  • 73 is not warmer than temperatures[6] (76).
  • Push index 7.
Temperatures: [73, 74, 75, 71, 69, 72, 76, 73]
                                           ^ (Current: 73)
Stack:
| 7 |  (Temp: 73)
| 6 |  (Temp: 76)
+---+

End of Loop

  • The loop finishes. Any indices left in the stack (6 and 7) don't have a warmer day after them, so their results remain 0.

Final Result: [1, 1, 4, 2, 1, 1, 0, 0]


Code Example (JavaScript)

function dailyTemperatures(temperatures) {
  const result = new Array(temperatures.length).fill(0);
  const stack = []; // We'll store indices here

  for (let i = 0; i < temperatures.length; i++) {
    // While stack is not empty AND current temp is warmer than the temp at the index on top of the stack
    while (stack.length > 0 && temperatures[i] > temperatures[stack[stack.length - 1]]) {
      const prevIndex = stack.pop();
      result[prevIndex] = i - prevIndex;
    }
    // Push the current index onto the stack
    stack.push(i);
  }

  return result;
}

Code Example (GoLang)

func dailyTemperatures(temperatures []int) []int {
    n := len(temperatures)
    result := make([]int, n)
    stack := []int{} // We'll store indices here

    for i := 0; i < n; i++ {
        // While stack is not empty AND current temp is warmer than the temp at the index on top of the stack
        for len(stack) > 0 && temperatures[i] > temperatures[stack[len(stack)-1]] {
            prevIndex := stack[len(stack)-1]
            stack = stack[:len(stack)-1] // Pop
            result[prevIndex] = i - prevIndex
        }
        // Push the current index onto the stack
        stack = append(stack, i)
    }

    return result
}

And that's how a monotonic stack gives us an elegant and efficient solution!

Read more...

]]>
<![CDATA[Weighted Quick-Union with Path Compression]]> https://fezcode.com/blog/wquwpc https://fezcode.com/blog/wquwpc Tue, 04 Nov 2025 00:00:00 GMT A Deep Dive into Weighted Quick-Union with Path Compression

In the world of computer science, some algorithms are so efficient and elegant that they feel like magic. The Weighted Quick-Union with Path Compression algorithm is one of them. It's the gold standard for solving a class of problems known as "dynamic connectivity" problems.

Let's break it down.

The Problem: Dynamic Connectivity

Imagine you have a set of objects. Over time, you're told that certain pairs of these objects are now connected. The fundamental question you want to answer, at any point, is: "Are object A and object B connected?"

This "connection" could mean anything:

  • Social Networks: Are two people in the same network of friends?
  • Computer Networks: Can two computers on a network send messages to each other?
  • Image Processing: Are two pixels part of the same contiguous region of color?
  • Maze Solving: Is there a path from the start to the end?

The data structure that handles this is often called a Union-Find or Disjoint-Set Union (DSU). It has two primary operations:

  1. union(p, q): Connect object p and object q.
  2. find(p): Find the "identifier" of the group that p belongs to. If find(p) equals find(q), then p and q are connected.

The Journey to Optimization

Let's build up to the final, optimized algorithm by looking at simpler versions first. We'll represent our objects as nodes in a forest (a collection of trees). Each tree represents a connected component. The root of the tree is the unique identifier for that component.

Attempt 1: Quick-Union

In this approach, each node has a pointer to its parent. To find the root of a node, you just follow the parent pointers until you reach a node that points to itself.

  • find(p): Follow parent[p] until you reach the root.
  • union(p, q): Find the root of p (let's call it rootP) and the root of q (rootQ). Then, simply set the parent of rootP to be rootQ.

The Problem: This can lead to very tall, skinny trees. Imagine connecting items in a line: union(0,1), union(1,2), union(2,3), ... The tree becomes a long chain. A find operation on the deepest node would have to traverse the entire chain, making it slow (O(N) in the worst case).

union(0,1), union(1,2), union(2,3)

      3
      |
      2
      |
      1
      |
      0
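For contrast with the fully optimized implementation at the end of this post, here is a minimal sketch of this naive quick-union in Go. It is deliberately bare: no weighting, no path compression, so the degenerate chain above can actually happen:

```go
package main

import "fmt"

// QuickUnion is the naive version: no weighting, no path compression.
type QuickUnion struct {
	parent []int
}

func NewQuickUnion(n int) *QuickUnion {
	parent := make([]int, n)
	for i := range parent {
		parent[i] = i // each element starts as its own root
	}
	return &QuickUnion{parent: parent}
}

// Find follows parent pointers until it reaches a self-pointing root.
// Worst case O(N) when the tree degenerates into a chain.
func (qu *QuickUnion) Find(p int) int {
	for p != qu.parent[p] {
		p = qu.parent[p]
	}
	return p
}

// Union simply hangs the root of p under the root of q.
func (qu *QuickUnion) Union(p, q int) {
	rootP := qu.Find(p)
	rootQ := qu.Find(q)
	if rootP != rootQ {
		qu.parent[rootP] = rootQ
	}
}

func main() {
	qu := NewQuickUnion(4)
	qu.Union(0, 1)
	qu.Union(1, 2)
	qu.Union(2, 3)
	// The three unions above build exactly the chain 0 -> 1 -> 2 -> 3.
	fmt.Println(qu.Find(0) == qu.Find(3)) // true, but Find(0) walked the whole chain
}
```

Running the example reproduces the chain from the diagram: 3 ends up as the root, and Find(0) has to traverse every node to reach it.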

Attempt 2: Weighted Quick-Union (Union by Size/Rank)

To avoid creating long chains, we can be smarter about our union operation. Instead of arbitrarily connecting one root to another, let's keep track of the "size" (number of nodes) of each tree.

When we perform union(p, q), we find their roots (rootP and rootQ). We then connect the smaller tree to the root of the larger tree.

This simple change has a profound impact. It ensures our trees stay relatively short and bushy, preventing the worst-case scenario of a long chain. The maximum height of any tree is now guaranteed to be at most log(N), which makes our find operation much faster (O(log N)).

Example:

  • Tree A has 5 nodes.
  • Tree B has 2 nodes.
  • To union them, we make the root of Tree B a child of the root of Tree A. The new combined tree has a size of 7.

The Final Touch: Path Compression

We can do even better. This optimization is applied during the find operation and is incredibly clever.

When we call find(p), we traverse a path of nodes from p up to the root. After we find the root, we can go back along that same path and make every node we visited point directly to the root.

Before Path Compression: find(0) requires traversing 0 -> 1 -> 2 -> 3 -> 4 (root)

      4
      |
      3
     / \
    2   5
    |
    1
    |
    0

After Path Compression on find(0): Now, nodes 0, 1, 2, and 3 all point directly to the root, 4.

        4
      / | | \
     /  | |  \
    3   2 1   0
    |
    5

The next time we call find on any of those nodes (0, 1, 2, or 3), we'll get to the root in a single step! Over many operations, this keeps the trees incredibly flat.

The Result: Nearly Constant Time

When you combine Weighted Quick-Union with Path Compression, the performance becomes astonishingly good. The amortized time complexity for both union and find is nearly constant, often written as O(α(N)), where α(N) is the Inverse Ackermann function.

This function grows so slowly that for any input size you could possibly encounter in the real world (even larger than the number of atoms in the universe), α(N) is never greater than 5. For all practical purposes, the algorithm runs in constant time per operation.

Golang Implementation

Here is a full implementation in Go that combines both optimizations.

package main

import "fmt"

// WeightedQuickUnionPathCompression implements the union-find data structure with
// both weighting and path compression optimizations.
type WeightedQuickUnionPathCompression struct {
	// parent[i] = parent of i
	parent []int
	// size[i] = number of nodes in the subtree rooted at i
	size []int
	// count is the number of disjoint sets
	count int
}

// New initializes a new union-find data structure with n elements.
// Each element initially is in its own set.
func New(n int) *WeightedQuickUnionPathCompression {
	parent := make([]int, n)
	size := make([]int, n)
	for i := 0; i < n; i++ {
		parent[i] = i
		size[i] = 1
	}
	return &WeightedQuickUnionPathCompression{
		parent: parent,
		size:   size,
		count:  n,
	}
}

// Find returns the root of the component/set containing element p.
// It uses path compression to flatten the tree structure.
func (uf *WeightedQuickUnionPathCompression) Find(p int) int {
	// Find the root
	root := p
	for root != uf.parent[root] {
		root = uf.parent[root]
	}
	// Path compression: make every node on the path point to the root
	for p != root {
		newp := uf.parent[p]
		uf.parent[p] = root
		p = newp
	}
	return root
}

// Connected returns true if elements p and q are in the same set.
func (uf *WeightedQuickUnionPathCompression) Connected(p, q int) bool {
	return uf.Find(p) == uf.Find(q)
}

// Union merges the set containing element p with the set containing element q.
// It uses weighting (union by size) to keep the trees flat.
func (uf *WeightedQuickUnionPathCompression) Union(p, q int) {
	rootP := uf.Find(p)
	rootQ := uf.Find(q)

	if rootP == rootQ {
		return
	}

	// Weighted union: attach the smaller tree to the root of the larger tree.
	if uf.size[rootP] < uf.size[rootQ] {
		uf.parent[rootP] = rootQ
		uf.size[rootQ] += uf.size[rootP]
	} else {
		uf.parent[rootQ] = rootP
		uf.size[rootP] += uf.size[rootQ]
	}
	uf.count--
}

// Count returns the number of disjoint sets.
func (uf *WeightedQuickUnionPathCompression) Count() int {
	return uf.count
}

func main() {
	// Example Usage:
	// Consider 10 elements, 0 through 9.
	uf := New(10)
	fmt.Printf("Initial components: %d\n", uf.Count()) // 10

	uf.Union(4, 3)
	uf.Union(3, 8)
	uf.Union(6, 5)
	uf.Union(9, 4)
	uf.Union(2, 1)

	fmt.Printf("Are 8 and 9 connected? %t\n", uf.Connected(8, 9)) // true
	fmt.Printf("Are 5 and 4 connected? %t\n", uf.Connected(5, 4)) // false

	uf.Union(5, 0)
	uf.Union(7, 2)
	uf.Union(6, 1)
	uf.Union(1, 8)

	fmt.Printf("Are 5 and 4 connected now? %t\n", uf.Connected(5, 4)) // true
	fmt.Printf("Final components: %d\n", uf.Count())                  // 1
}

Conclusion

The Weighted Quick-Union with Path Compression algorithm is a testament to how clever optimizations can turn a slow, impractical solution into one that is breathtakingly fast. It's a fundamental tool in a programmer's arsenal, perfect for any problem that can be modeled as a set of objects with evolving connections. Its elegance and efficiency make it a classic and beautiful piece of computer science.

Read more...

]]>
<![CDATA[Ubuntu Once More]]> https://fezcode.com/blog/ubuntu-once-more https://fezcode.com/blog/ubuntu-once-more Wed, 29 Oct 2025 00:00:00 GMT Ubuntu Once More

Trying Ubuntu 25.10

More than once a year, I get the itch to change the Linux distro I use daily. To make this easier, I bought a Lenovo IdeaPad Slim 3 to serve as my dedicated "distrohopper" laptop.

This time, however, I took a bigger leap and installed it on my main desktop PC. I had a spare SSD full of video games, which I formatted for the occasion. I downloaded the ISO, ran balenaEtcher, and hoped for the best.

My current PC setup has two displays: one 4K and one 2K. The 2K monitor is connected via HDMI, so most Linux distros default to it as the main display. However, my 4K IPS display, connected via DisplayPort, is my actual primary. It has a 144Hz refresh rate and vibrant colors, making it perfect for my needs.

Unfortunately, Linux installations often disagree. Whenever I tried to install Ubuntu with both displays connected, the installation would abruptly fail. I spent two hours debugging the issue, but error messages, error codes, and online forums offered no clear explanation.

Finally, I spotted the word "display" in an error message. Drawing on past experiences with Linux distros, I decided to disconnect my 4K display. It worked! Ubuntu 25.10 installed successfully on my main PC.

The NTFS support is fantastic, and the EXT4 support in WSL2 is also great. It's wonderful that Windows and Linux can finally read and write to each other's filesystems.

GRUB, however, is currently a disaster. I can't edit the entries for some reason, and I don't want to risk breaking my setup, so I'm leaving it alone for now. I might look into it tomorrow...

What to do after installation

The first thing I did was install zsh. For reasons I can't quite explain, I always install oh-my-zsh and git right away. Here's a list of my essential (not really) apps:

  • Zen Browser (my current favorite)
  • Zed
  • CLion (Linux is the only platform I use for C++ development)
  • VSCode
  • Sublime Text (Zed has mostly replaced it for now)
  • Insomnia (and Postman)
  • Spotify
  • Youtube Music (pear)
  • VLC
  • Obsidian
  • dotnet
  • go + golangci-lint
  • node.js runtime
  • GH CLI
  • Gemini CLI
  • GNOME Tweak Tool
  • GNOME Extensions
    • Apps Menu
    • ISO Clock
  • Grub Customizer (which doesn't seem to be working)

Why

I'm a Debian fan who loves using Fedora. I know it sounds weird, but it's true. Fedora has always been the only OS that works as seamlessly with my peripherals as Windows. I've tried to install Debian on every machine I've owned but could never get it to run properly. So, as a Debian enthusiast, I enjoy trying its different flavours. For some reason, Ubuntu just works. I'm currently very happy with my setup. :yay:

Read more...

]]>
<![CDATA[Project Overview]]> https://fezcode.com/blog/project-overview https://fezcode.com/blog/project-overview Sat, 25 Oct 2025 00:00:00 GMT 001 - Project Overview: Fezcode

This document provides a high-level overview of the "Fezcode" project, a React-based web application designed to serve as a personal blog or portfolio site.

Purpose

The primary purpose of this project is to display blog posts, projects, and other content in a structured and visually appealing manner. It leverages modern web technologies to create a dynamic and responsive user experience.

Key Technologies

The project is built using the following core technologies:

  • React: A JavaScript library for building user interfaces. It allows for the creation of reusable UI components and manages the state of the application efficiently.
  • Create React App (CRA) with Craco: The project was likely bootstrapped using Create React App, which provides a solid foundation for React development. Craco (Create React App Configuration Override) is used to customize the Webpack and Babel configurations without ejecting from CRA, enabling features like Tailwind CSS integration.
  • Tailwind CSS: A utility-first CSS framework that allows for rapid UI development by composing pre-defined CSS classes directly in the markup.
  • React Router DOM: A library for handling client-side routing in React applications, enabling navigation between different pages without full page reloads.
  • Framer Motion: A production-ready motion library for React, used for animations and interactive elements.
  • Phosphor Icons / React Icons: Libraries providing a collection of customizable SVG icons.
  • Markdown: Blog post content is written in Markdown and rendered using react-markdown.
  • Syntax Highlighting: Code blocks within Markdown are highlighted using react-syntax-highlighter.
  • GitHub Pages: The application is deployed to GitHub Pages, a static site hosting service.

Project Structure Highlights

The project follows a typical React application structure, with key directories including:

  • public/: Contains static assets like index.html, images, and the raw content for blog posts (posts/), logs (logs/), and projects (projects/).
  • src/: Contains the main application source code, organized into:
    • components/: Reusable UI components (e.g., Navbar, Footer, Toast).
    • pages/: Page-level components that represent different views of the application (e.g., HomePage, BlogPostPage, NotFoundPage).
    • hooks/: Custom React hooks for encapsulating reusable logic (e.g., useToast).
    • utils/: Utility functions and helpers.
    • styles/: Custom CSS files.
    • config/: Configuration files (e.g., colors, fonts).
  • scripts/: Contains utility scripts, such as generateWallpapers.js.

How it Works (High-Level)

  1. Entry Point (src/index.js): The application starts by rendering the main App component into the index.html file.
  2. Main Application (src/App.js): The App component sets up client-side routing using HashRouter, defines the overall layout, and manages global contexts like the ToastContext.
  3. Routing (react-router-dom): AnimatedRoutes (likely a component that uses react-router-dom's Routes and Route components) handles mapping URLs to specific page components.
  4. Content Fetching: Blog posts and other dynamic content are fetched from .txt files located in the public/ directory. Metadata for these posts is often stored in corresponding .json files (e.g., public/posts/posts.json). The blog page also includes search functionality for finding posts by title or slug.
  5. Styling (Tailwind CSS): The UI is styled primarily using Tailwind CSS utility classes, with some custom CSS if needed.
  6. Deployment: The application is built into static assets and deployed to GitHub Pages using the gh-pages package.

This overview provides a foundational understanding of the Fezcode project. Subsequent documents will delve into more specific details of each component and concept.

Read more...

]]>
<![CDATA[Package Json Explained]]> https://fezcode.com/blog/package-json-explained https://fezcode.com/blog/package-json-explained Sat, 25 Oct 2025 00:00:00 GMT 002 - package.json Explained

The package.json file is a crucial part of any Node.js project, including React applications. It acts as a manifest for the project, listing its metadata, scripts, and dependencies. Let's break down the key sections of this project's package.json.

{
  "name": "fezcodex",
  "version": "0.1.0",
  "private": true,
  "homepage": "https://fezcode.com",
  "dependencies": {
    "@phosphor-icons/react": "^2.1.10",
    "@testing-library/dom": "^10.4.1",
    "@testing-library/jest-dom": "^6.9.1",
    "@testing-library/react": "^16.3.0",
    "@testing-library/user-event": "^13.5.0",
    "framer-motion": "^12.23.24",
    "front-matter": "^4.0.2",
    "react": "^19.2.0",
    "react-dom": "^19.2.0",
    "react-icons": "^5.5.0",
    "react-markdown": "^10.1.0",
    "react-router-dom": "^7.9.4",
    "react-scripts": "5.0.1",
    "react-slick": "^0.31.0",
    "react-syntax-highlighter": "^15.6.6",
    "slick-carousel": "^1.8.1",
    "web-vitals": "^2.1.4"
  },
  "scripts": {
    "prestart": "node scripts/generateWallpapers.js",
    "start": "craco start",
    "prebuild": "node scripts/generateWallpapers.js",
    "build": "craco build",
    "test": "craco test",
    "eject": "react-scripts eject",
    "lint": "eslint \"src/**/*.{js,jsx}\" \"scripts/**/*.js\" --fix",
    "format": "prettier --write \"src/**/*.{js,jsx,css,json}\"",
    "predeploy": "npm run build",
    "deploy": "gh-pages -d build -b gh-pages"
  },
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ]
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  },
  "devDependencies": {
    "@craco/craco": "^7.1.0",
    "@tailwindcss/typography": "^0.5.19",
    "autoprefixer": "^10.4.21",
    "cross-env": "^10.1.0",
    "gh-pages": "^6.3.0",
    "postcss": "^8.5.6",
    "prettier": "^3.6.2",
    "tailwindcss": "^3.4.18"
  }
}

Top-Level Fields

  • name: "fezcodex" - The name of the project. This is often used for npm packages and identifies your project.
  • version: "0.1.0" - The current version of the project. Follows semantic versioning (major.minor.patch).
  • private: true - Indicates that the package is not intended to be published to a public npm registry. This is common for application-level projects.
  • homepage: "https://fezcode.com" - Specifies the homepage URL for the project. For applications deployed to GitHub Pages, this is often the live URL.

dependencies

This section lists all the packages required by the application to run in production. These are core libraries that your code directly uses.

  • @phosphor-icons/react: Provides a flexible icon library with a focus on consistency and customization.
  • @testing-library/dom, @testing-library/jest-dom, @testing-library/react, @testing-library/user-event: These are testing utilities that facilitate writing user-centric tests for React components. They help ensure the application behaves as expected from a user's perspective.
  • framer-motion: A powerful and easy-to-use library for creating animations and interactive elements in React applications.
  • front-matter: A utility for parsing front-matter (metadata) from strings, typically used with Markdown files.
  • react: The core React library itself.
  • react-dom: Provides DOM-specific methods that enable React to interact with the web browser's DOM.
  • react-icons: Another popular library offering a wide range of customizable SVG icons from various icon packs.
  • react-markdown: A React component that securely renders Markdown as React elements, allowing you to display Markdown content in your application.
  • react-router-dom: The standard library for client-side routing in React applications, allowing navigation between different views.
  • react-scripts: A package from Create React App that provides scripts for common development tasks like starting a development server, building for production, and running tests.
  • react-slick / slick-carousel: Libraries used for creating carousels or sliders, likely for displaying image galleries or testimonials.
  • react-syntax-highlighter: A component that enables syntax highlighting for code blocks, often used in conjunction with react-markdown to display code snippets beautifully.
  • web-vitals: A library for measuring and reporting on a set of standardized metrics that reflect the real-world user experience on your website.

scripts

This object defines a set of command-line scripts that can be executed using npm run <script-name>. These automate common development and deployment tasks.

  • prestart: "node scripts/generateWallpapers.js" - A pre-script hook that runs before the start script. In this case, it executes a Node.js script to generate wallpapers, likely for dynamic backgrounds or assets.
  • start: "craco start" - Starts the development server. craco (Create React App Configuration Override) is used here to allow customizing the underlying Webpack/Babel configuration of react-scripts without ejecting the CRA setup.
  • prebuild: "node scripts/generateWallpapers.js" - Similar to prestart, this runs before the build script, ensuring assets are generated before the production build.
  • build: "craco build" - Creates a production-ready build of the application, optimizing and bundling all assets for deployment.
  • test: "craco test" - Runs the project's test suite.
  • eject: "react-scripts eject" - This is a one-way operation that removes the single build dependency from your project, giving you full control over the Webpack configuration files and build scripts. It's rarely used unless deep customization is needed.
  • lint: "eslint \"src/**/*.{js,jsx}\" \"scripts/**/*.js\" --fix" - Runs ESLint, a tool for identifying and reporting on patterns in JavaScript code to maintain code quality and style. The --fix flag attempts to automatically fix some issues.
  • format: "prettier --write \"src/**/*.{js,jsx,css,json}\"" - Runs Prettier, an opinionated code formatter, to ensure consistent code style across the project. The --write flag formats files in place.
  • predeploy: "npm run build" - Runs the build script before the deploy script, ensuring that the latest production build is created before deployment.
  • deploy: "gh-pages -d build -b gh-pages" - Deploys the build directory to the gh-pages branch of the GitHub repository, facilitating hosting on GitHub Pages.

eslintConfig

This field configures ESLint. "extends": ["react-app", "react-app/jest"] means it's extending the recommended ESLint configurations provided by Create React App, along with specific rules for Jest testing.

browserslist

This field specifies the target browsers for your client-side code. This is used by tools like Babel and Autoprefixer to ensure your JavaScript and CSS are compatible with the specified browser versions.

  • production: Defines the browser targets for the production build (e.g., browsers with more than 0.2% market share, excluding browsers that are no longer maintained and Opera Mini).
  • development: Defines less strict browser targets for development, usually focusing on the latest versions of common development browsers.

devDependencies

These are packages required only for development and building the project, not for the application to run in production. They provide tools, testing utilities, and build-related functionalities.

  • @craco/craco: The main Craco package that allows overriding Create React App's Webpack configuration.
  • @tailwindcss/typography: A Tailwind CSS plugin that provides a set of prose classes to add beautiful typographic defaults to raw HTML or Markdown, improving readability of content.
  • autoprefixer: A PostCSS plugin that adds vendor prefixes to CSS rules, ensuring cross-browser compatibility.
  • cross-env: A utility that provides a universal way to set environment variables across different operating systems, commonly used in npm scripts.
  • gh-pages: A tool specifically for publishing content to the gh-pages branch on GitHub, used for deploying to GitHub Pages.
  • postcss: A tool for transforming CSS with JavaScript plugins. Tailwind CSS relies on PostCSS.
  • prettier: The code formatter used in the format script.
  • tailwindcss: The core Tailwind CSS framework, enabling utility-first styling in the project.

This package.json file provides a comprehensive insight into the project's setup, dependencies, and available scripts for development, testing, and deployment.

Read more...

]]>
<![CDATA[Index Js Entry Point]]> https://fezcode.com/blog/index-js-entry-point https://fezcode.com/blog/index-js-entry-point Sat, 25 Oct 2025 00:00:00 GMT 003 - src/index.js Entry Point Explained

src/index.js is the absolute entry point of your React application. It's the first JavaScript file that gets executed when your web page loads. Its primary responsibility is to render your root React component (App in this case) into the HTML document.

import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
);

// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals();

Line-by-Line Explanation

Imports

import React from 'react';
  • import React from 'react';: This line imports the React library. Traditionally, Babel (the JavaScript compiler) required React to be in scope so it could transform JSX into React.createElement calls. Since React 17 introduced the new JSX transform, this import is no longer strictly required for JSX, but it remains a common convention.
import ReactDOM from 'react-dom/client';
  • import ReactDOM from 'react-dom/client';: This imports the ReactDOM client-specific library, which provides methods to interact with the DOM (Document Object Model) in a web browser. Specifically, react-dom/client is the modern API for client-side rendering with React 18+.
import './index.css';
  • import './index.css';: This line imports the global CSS stylesheet for the application. When bundled, Webpack (or a similar tool used by Create React App/Craco) processes this import, often injecting the styles into the HTML document at runtime or extracting them into a separate CSS file.
import App from './App';
  • import App from './App';: This imports the main App component, which serves as the root of your entire React component tree. The App component will contain the application's layout, routing, and other main functionalities.
import reportWebVitals from './reportWebVitals';
  • import reportWebVitals from './reportWebVitals';: This imports a utility function that helps measure and report on your application's Web Vitals. Web Vitals are a set of metrics from Google that quantify the user experience of a web page.

Root Element Creation and Rendering

const root = ReactDOM.createRoot(document.getElementById('root'));
  • ReactDOM.createRoot(document.getElementById('root')): This is the modern way to initialize a React application for client-side rendering (React 18+). It finds the HTML element with the ID root (defined in public/index.html) and creates a React root. This root object is where your React application will be attached to the DOM.
root.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
);
  • root.render(...): This method tells React to display the App component inside the root DOM element. Whatever is rendered within root.render will be managed by React.

    • <React.StrictMode>: This is a wrapper component that helps identify potential problems in an application. It activates additional checks and warnings for its descendants in development mode. For example, it helps detect deprecated lifecycle methods, unexpected side effects, and more. It does not render any visible UI; it is purely a development tool.
    • <App />: This is your main application component, as imported earlier. All other components and the entire UI will be rendered as children of this App component.

Web Vitals Reporting

reportWebVitals();
  • reportWebVitals();: This function call initiates the measurement and reporting of Web Vitals metrics, which can be useful for performance monitoring and optimization. The function in reportWebVitals.js typically sends these metrics to an analytics endpoint or logs them to the console.
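The comment in the source above suggests passing a handler such as console.log. As an illustration, a hypothetical handler could format each metric before logging it; formatMetric below is not part of the project, and the { name, value } shape is the form the web-vitals library reports:

```javascript
// Hypothetical helper: format a Web Vitals metric entry for logging.
// `metric.name` is e.g. 'CLS' or 'LCP'; `metric.value` is the raw measurement.
function formatMetric(metric) {
  const rounded = Math.round(metric.value * 100) / 100; // two decimal places
  return `${metric.name}: ${rounded}`;
}

// Usage in src/index.js would look like:
// reportWebVitals((m) => console.log(formatMetric(m)));
```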

Summary

src/index.js is the foundational file where your React application begins its life in the browser. It sets up the bridge between your React code and the actual HTML document, ensuring your components are rendered and managed correctly, and optionally enables development tools like Strict Mode and performance monitoring with Web Vitals.

Read more...

]]>
<![CDATA[App Js Main Component]]> https://fezcode.com/blog/app-js-main-component https://fezcode.com/blog/app-js-main-component Sat, 25 Oct 2025 00:00:00 GMT 004 - src/App.js Main Component Explained

src/App.js is the main component of your React application. It acts as the root of your component tree (after index.js renders it) and is responsible for setting up global configurations like routing, layout, and context providers that are available throughout your application.

import React from 'react';
import { HashRouter as Router } from 'react-router-dom';
import Layout from './components/Layout';
import AnimatedRoutes from './components/AnimatedRoutes';
import { ToastContext } from './components/ToastContext';
import ScrollToTop from './components/ScrollToTop';

function App() {
  return (
    <Router>
      <ScrollToTop />
      <ToastContext>
        <Layout>
          <AnimatedRoutes />
        </Layout>
      </ToastContext>
    </Router>
  );
}

export default App;

Line-by-Line Explanation

Imports

import React from 'react';
  • import React from 'react';: Imports the React library, necessary for defining React components and using JSX.
import { HashRouter as Router } from 'react-router-dom';
  • import { HashRouter as Router } from 'react-router-dom';: Imports HashRouter from the react-router-dom library and renames it to Router for convenience. HashRouter uses the hash portion of the URL (e.g., /#/blog) to keep your UI in sync with the URL. This is often preferred for static site deployments like GitHub Pages because it doesn't require server-side configuration for routing.
import Layout from './components/Layout';
  • import Layout from './components/Layout';: Imports the Layout component. This component likely defines the overall structure of your application, such as headers, footers, and sidebars, and wraps the main content area.
import AnimatedRoutes from './components/AnimatedRoutes';
  • import AnimatedRoutes from './components/AnimatedRoutes';: Imports the AnimatedRoutes component. This component is responsible for defining the application's routes and likely incorporates animation for page transitions, possibly using a library like framer-motion.
import { ToastContext } from './components/ToastContext';
  • import { ToastContext } from './components/ToastContext';: Imports the ToastContext component. This component is part of React's Context API pattern. It makes a toast (a small, temporary notification) functionality available to all its child components without having to pass props down manually at every level.
import ScrollToTop from './components/ScrollToTop';
  • import ScrollToTop from './components/ScrollToTop';: Imports the ScrollToTop component. This component is typically used in conjunction with routing to automatically scroll the window to the top of the page whenever the route changes, providing a better user experience.
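The HashRouter choice above can be illustrated with a small sketch: the route lives entirely after the #, so a static host like GitHub Pages always serves the same index.html no matter which page is requested. hashPath below is purely illustrative, not part of react-router:

```javascript
// Hypothetical sketch: extract the client-side route from a hash-based URL.
// The server only ever sees the part before '#', so no server-side routing
// configuration is needed.
function hashPath(url) {
  const i = url.indexOf('#');
  return i === -1 ? '/' : url.slice(i + 1) || '/';
}
```

For example, hashPath('https://user.github.io/app/#/blog') yields '/blog', while a URL with no hash falls back to the root route.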

The App Component

function App() {
  return (
    <Router>
      <ScrollToTop />
      <ToastContext>
        <Layout>
          <AnimatedRoutes />
        </Layout>
      </ToastContext>
    </Router>
  );
}
  • function App() { ... }: This defines a functional React component named App. Functional components are the modern way to write React components and are essentially JavaScript functions that return JSX.

  • return (...): The return statement contains the JSX (JavaScript XML) that defines the UI structure for the App component.

    • <Router>: This is the HashRouter component from react-router-dom. It wraps the entire application, enabling client-side routing. Any component within this Router can use routing features like Link and useParams.

    • <ScrollToTop />: This component is rendered directly inside the Router. Its effect (scrolling to top on route change) will apply globally to the application.

    • <ToastContext>: This component wraps the Layout and AnimatedRoutes. This means that any component rendered within the Layout or AnimatedRoutes will have access to the toast functionality provided by the ToastContext via the useContext hook.

    • <Layout>: This component defines the common structure (e.g., header, footer, navigation) that will be present on most pages. It wraps the AnimatedRoutes component, meaning the routed content will be displayed within this layout.

    • <AnimatedRoutes />: This component is where the actual route definitions (e.g., /blog, /about, /projects) are handled. When the URL changes, AnimatedRoutes will render the appropriate page component (e.g., BlogPostPage, HomePage) within the Layout.

Export

export default App;
  • export default App;: This makes the App component the default export of this module, allowing it to be imported by other files (like src/index.js).

Summary

src/App.js orchestrates the main structure and global functionalities of the application. It sets up routing, provides global context for notifications, and defines the overarching layout, ensuring a consistent user experience across different pages.

Read more...

]]>
<![CDATA[Blog Post Page Component]]> https://fezcode.com/blog/blog-post-page-component https://fezcode.com/blog/blog-post-page-component Sat, 25 Oct 2025 00:00:00 GMT 005 - src/pages/BlogPostPage.js Component Explained

src/pages/BlogPostPage.js is a critical component responsible for displaying individual blog posts. It handles fetching the post content and metadata, rendering Markdown, syntax highlighting code blocks, and managing UI interactivity like copying code or opening code in a modal. It also includes navigation for series posts and robust error handling for missing content.

import React, { useState, useEffect, useRef } from 'react';
import { useParams, Link, useNavigate } from 'react-router-dom';
import ReactMarkdown from 'react-markdown';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import {
  ArrowSquareOut,
  ArrowsOutSimple,
  Clipboard,
  ArrowLeft,
} from '@phosphor-icons/react';
import { customTheme } from '../utils/customTheme';
import PostMetadata from '../components/PostMetadata';
import CodeModal from '../components/CodeModal';
import { useToast } from '../hooks/useToast';

// ... LinkRenderer and CodeBlock components (explained below)

const BlogPostPage = () => {
  const { slug, seriesSlug, episodeSlug } = useParams();
  const navigate = useNavigate();
  const currentSlug = episodeSlug || slug; // Use episodeSlug if present, otherwise use slug
  const [post, setPost] = useState(null);
  const [loading, setLoading] = useState(true);
  const [readingProgress, setReadingProgress] = useState(0);
  const [isAtTop, setIsAtTop] = useState(true); // New state for tracking if at top
  const contentRef = useRef(null);
  const [isModalOpen, setIsModalToOpen] = useState(false);
  const [modalContent, setModalContent] = useState('');

  const openModal = (content) => {
    setModalContent(content);
    setIsModalToOpen(true);
  };

  const closeModal = () => {
    setIsModalToOpen(false);
    setModalContent('');
  };

  useEffect(() => {
    const fetchPost = async () => {
      setLoading(true);
      console.log('Fetching post for currentSlug:', currentSlug);
      try {
        const [postContentResponse, shownPostsResponse] = await Promise.all([
          fetch(`/posts/${currentSlug}.txt`),
          fetch('/posts/shownPosts.json'),
        ]);

        console.log('postContentResponse:', postContentResponse);
        console.log('shownPostsResponse:', shownPostsResponse);

        let postBody = '';
        if (postContentResponse.ok) {
          postBody = await postContentResponse.text();
          // Check if the fetched content is actually HTML (indicating a fallback to index.html)
          if (postBody.trim().startsWith('<!DOCTYPE html>')) {
            console.error('Fetched content is HTML, not expected post content for:', currentSlug);
            navigate('/404'); // Redirect to 404 page
            return; // Stop further processing
          }
        } else {
          console.error('Failed to fetch post content for:', currentSlug);
          navigate('/404'); // Redirect to 404 page
          return; // Stop further processing
        }

        let postMetadata = null;
        let seriesPosts = [];
        if (shownPostsResponse.ok) {
          const allPosts = await shownPostsResponse.json();
          postMetadata = allPosts.find((item) => item.slug === currentSlug);

          if (postMetadata && postMetadata.series) {
            seriesPosts = allPosts
              .filter((item) => item.series === postMetadata.series)
              .sort((a, b) => a.seriesIndex - b.seriesIndex);
          }
        } else {
          console.error('Failed to fetch shownPosts.json');
        }

        console.log('postMetadata:', postMetadata);
        console.log('postBody length:', postBody.length);

        if (postMetadata && postContentResponse.ok) {
          setPost({ attributes: postMetadata, body: postBody, seriesPosts });
          console.log('Post set:', { attributes: postMetadata, body: postBody, seriesPosts });
        } else {
          setPost({ attributes: { title: 'Post not found' }, body: '' });
          console.log('Post not found or content not fetched.');
        }
      } catch (error) {
        console.error('Error fetching post or shownPosts.json:', error);
        setPost({ attributes: { title: 'Error loading post' }, body: '' });
      } finally {
        setLoading(false);
      }
    };

    fetchPost();
  }, [currentSlug]);

  useEffect(() => {
    const handleScroll = () => {
      if (contentRef.current) {
        const { scrollTop, scrollHeight, clientHeight } =
          document.documentElement;
        const totalHeight = scrollHeight - clientHeight;
        const currentProgress = (scrollTop / totalHeight) * 100;
        setReadingProgress(currentProgress);
        setIsAtTop(scrollTop === 0); // Update isAtTop based on scroll position
      }
    };

    window.addEventListener('scroll', handleScroll);
    return () => window.removeEventListener('scroll', handleScroll);
  }, [post]); // Re-attach scroll listener if post changes

  if (loading) {
    // Skeleton loading screen for BlogPostPage
    return (
      <div className="bg-gray-900 py-16 sm:py-24 animate-pulse">
        <div className="mx-auto max-w-7xl px-6 lg:px-8">
          <div className="lg:grid lg:grid-cols-4 lg:gap-8">
            <div className="lg:col-span-3">
              <div className="h-8 bg-gray-800 rounded w-1/4 mb-4"></div>
              <div className="h-12 bg-gray-800 rounded w-3/4 mb-8"></div>
              <div className="space-y-4">
                <div className="h-6 bg-gray-800 rounded w-full"></div>
                <div className="h-6 bg-gray-800 rounded w-5/6"></div>
                <div className="h-6 bg-gray-800 rounded w-full"></div>
                <div className="h-6 bg-gray-800 rounded w-2/3"></div>
              </div>
            </div>
            <div className="hidden lg:block">
              <div className="bg-gray-800 rounded-lg shadow-lg p-6">
                <div className="h-8 bg-gray-700 rounded w-1/2 mb-4"></div>
                <div className="space-y-2">
                  <div className="h-4 bg-gray-700 rounded w-full"></div>
                  <div className="h-4 bg-gray-700 rounded w-3/4"></div>
                  <div className="h-4 bg-gray-700 rounded w-1/2"></div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    );
  }

  // if (!post) { // This check is now mostly handled by the navigate('/404') above.
  //   return <div className="text-center py-16">Post not found</div>;
  // }

  // Conditional rendering for post not found after loading or if attributes are missing
  if (!post || !post.attributes || post.body === '') {
    // If post is null, or attributes are missing (e.g., from shownPosts.json), or body is empty,
    // it implies the post couldn't be fully loaded or found. Ideally, navigate would handle this.
    // This serves as a fallback display.
    return (
      <div className="text-center py-16 text-gray-400">
        <h2 className="text-3xl font-bold mb-4">Post Not Found</h2>
        <p className="text-lg">The blog post you are looking for does not exist or could not be loaded.</p>
        <Link to="/blog" className="text-primary-400 hover:underline mt-4 inline-block">Go back to Blog</Link>
      </div>
    );
  }

  const currentPostIndex = post.seriesPosts ? post.seriesPosts.findIndex(
    (item) => item.slug === currentSlug,
  ) : -1;
  const prevPost = currentPostIndex > 0 ? post.seriesPosts[currentPostIndex - 1] : null;
  const nextPost = post.seriesPosts && currentPostIndex < post.seriesPosts.length - 1
    ? post.seriesPosts[currentPostIndex + 1]
    : null;

  const backLink = seriesSlug ? `/blog/series/${seriesSlug}` : '/blog';
  const backLinkText = seriesSlug ? 'Back to Series' : 'Back to Blog';

  return (
    <div className="bg-gray-900 py-16 sm:py-24">
      <div className="mx-auto max-w-7xl px-6 lg:px-8">
        <div className="lg:grid lg:grid-cols-4 lg:gap-8">
          <div className="lg:col-span-3">
            <Link
              to={backLink}
              className="text-primary-400 hover:underline flex items-center justify-center gap-2 text-lg mb-4"
            >
              <ArrowLeft size={24} /> {backLinkText}
            </Link>
            <div
              ref={contentRef}
              className="prose prose-xl prose-dark max-w-none"
            >
              <ReactMarkdown
                components={{
                  a: LinkRenderer,
                  code: (props) => (
                    <CodeBlock {...props} openModal={openModal} />
                  ),
                }}
              >
                {post.body}
              </ReactMarkdown>
            </div>
            {(prevPost || nextPost) && (
              <div className="mt-8 flex justify-between items-center border-t border-gray-700 pt-8">
                {prevPost && (
                  <Link
                    to={seriesSlug ? `/blog/series/${seriesSlug}/${prevPost.slug}` : `/blog/${prevPost.slug}`}
                    className="text-primary-400 hover:underline flex items-center gap-2"
                  >
                    <ArrowLeft size={20} /> Previous: {prevPost.title}
                  </Link>
                )}
                {nextPost && (
                  <Link
                    to={seriesSlug ? `/blog/series/${seriesSlug}/${nextPost.slug}` : `/blog/${nextPost.slug}`}
                    className="text-primary-400 hover:underline flex items-center gap-2 ml-auto"
                  >
                    Next: {nextPost.title} <ArrowLeft size={20} className="rotate-180" />
                  </Link>
                )}
              </div>
            )}
          </div>
          <div className="hidden lg:block">
            <PostMetadata
              metadata={post.attributes}
              readingProgress={readingProgress}
              isAtTop={isAtTop}
              overrideDate={post.attributes.date}
              updatedDate={post.attributes.updated}
              seriesPosts={post.seriesPosts}
            />
          </div>
        </div>
      </div>
      <CodeModal isOpen={isModalOpen} onClose={closeModal}>
        {modalContent}
      </CodeModal>
    </div>
  );
};

export default BlogPostPage;
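The previous/next derivation near the bottom of the component is pure data logic and can be sketched in isolation. The function below is a hypothetical standalone version (with an added guard so an unknown slug yields no "next" link, which the inline version does not check):

```javascript
// Hypothetical standalone version of the component's series navigation logic.
// `seriesPosts` is the list already sorted by seriesIndex; `currentSlug`
// identifies the post being viewed.
function seriesNeighbors(seriesPosts, currentSlug) {
  const i = seriesPosts
    ? seriesPosts.findIndex((p) => p.slug === currentSlug)
    : -1;
  return {
    prevPost: i > 0 ? seriesPosts[i - 1] : null,
    nextPost:
      seriesPosts && i !== -1 && i < seriesPosts.length - 1
        ? seriesPosts[i + 1]
        : null,
  };
}
```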

Read more...

]]>
<![CDATA[React Basics Components Props]]> https://fezcode.com/blog/react-basics-components-props https://fezcode.com/blog/react-basics-components-props Sat, 25 Oct 2025 00:00:00 GMT 006 - React Basics: Components and Props

At the core of React applications are components. Components are independent, reusable pieces of UI. They can be thought of as JavaScript functions that return JSX (JavaScript XML), which describes what the UI should look like. React applications are built by composing these components.

Functional Components

The project primarily uses functional components, which are JavaScript functions that accept a single props (properties) object argument and return React elements.

Example: App Component (src/App.js)

// src/App.js
import React from 'react';
// ... imports

function App() {
  return (
    <Router>
      {/* ... other components */}
      <Layout>
        <AnimatedRoutes />
      </Layout>
      {/* ... */}
    </Router>
  );
}

export default App;
  • function App() { ... }: This defines a functional component named App.
  • The return statement contains JSX, which is a syntax extension for JavaScript recommended by React to describe UI.
  • <Layout> and <AnimatedRoutes> are other components being used within App.

Example: Layout Component (src/components/Layout.js)

Let's look at src/components/Layout.js to see a slightly more complex functional component.

// src/components/Layout.js
import React, { useState, useEffect } from 'react';
import Navbar from './Navbar';
import Sidebar from './Sidebar';
import Footer from './Footer';
// ... other imports

const Layout = ({ children }) => {
  const [isSidebarOpen, setIsSidebarOpen] = useState(window.innerWidth > 768);
  // ... other state and effects

  return (
    <div className="bg-gray-950 min-h-screen font-sans flex">
      <Sidebar isOpen={isSidebarOpen} toggleSidebar={toggleSidebar} />
      <div
        className={`flex-1 flex flex-col transition-all duration-300 ${isSidebarOpen ? 'md:ml-64' : 'md:ml-0'}`}>
        <Navbar toggleSidebar={toggleSidebar} isSidebarOpen={isSidebarOpen} />
        <main className="flex-grow">{children}</main>
        <Footer />
      </div>
    </div>
  );
};

export default Layout;
  • const Layout = ({ children }) => { ... };: This defines another functional component, Layout, using an arrow function syntax. It directly destructures children from the props object. This is a common pattern.

Props (Properties)

Props are how you pass data from a parent component to a child component. They are read-only and allow components to be dynamic and reusable.

Passing Props

In the App component, you can see Layout being used:

// Inside App component's return
<Layout>
  <AnimatedRoutes />
</Layout>

Here, AnimatedRoutes is passed as a special prop called children to the Layout component. Whatever content you place between the opening and closing tags of a component becomes its children prop.

Receiving and Using Props

In the Layout component, children is received as a prop:

const Layout = ({ children }) => {
  // ...
  return (
    // ...
    <main className="flex-grow">{children}</main>
    // ...
  );
};

The Layout component then renders {children} inside its <main> tag, meaning the AnimatedRoutes (or whatever was passed as children) will be rendered in that spot.
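Because functional components are plain functions, the children mechanism can be pictured without JSX: <Layout><AnimatedRoutes /></Layout> is roughly Layout({ children: AnimatedRoutes() }). A simplified, non-JSX sketch (returning a string instead of real elements, purely for illustration):

```javascript
// Hypothetical non-JSX sketch: `children` is just another prop.
// A real Layout returns React elements; a string stands in for them here.
function Layout({ children }) {
  return `<main>${children}</main>`;
}

// The "parent" supplies children by calling the component with an object:
const rendered = Layout({ children: 'routes' });
```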

Another example of props in Layout.js:

<Sidebar isOpen={isSidebarOpen} toggleSidebar={toggleSidebar} />
<Navbar toggleSidebar={toggleSidebar} isSidebarOpen={isSidebarOpen} />

Here:

  • The Sidebar component receives two props: isOpen (a boolean state variable) and toggleSidebar (a function).
  • The Navbar component also receives toggleSidebar and isSidebarOpen.

These props are defined in the Layout component's scope and passed down to its child components (Sidebar, Navbar) to control their behavior or appearance. For instance, isOpen might control the visibility of the sidebar, and toggleSidebar would be a function to change that visibility when a button in the Navbar is clicked.
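The same idea applies to named props like isOpen and toggleSidebar: the child receives them as one destructured object argument. A minimal non-JSX sketch (the string return value is illustrative; a real Sidebar returns JSX):

```javascript
// Hypothetical sketch: named props arrive as a single object argument.
function Sidebar({ isOpen, toggleSidebar }) {
  // `toggleSidebar` would be wired to a close button's onClick in real JSX.
  return isOpen ? 'sidebar: open' : 'sidebar: closed';
}

// Equivalent of <Sidebar isOpen={true} toggleSidebar={...} />:
const output = Sidebar({ isOpen: true, toggleSidebar: () => {} });
```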

Summary

Functional components are the building blocks of React UIs, and props are the essential mechanism for communicating data and functionality between these components in a unidirectional flow (from parent to child). This modular approach makes React applications easier to manage, test, and scale.

Read more...

]]>
<![CDATA[React Hooks Usestate Useeffect]]> https://fezcode.com/blog/react-hooks-usestate-useeffect https://fezcode.com/blog/react-hooks-usestate-useeffect Sat, 25 Oct 2025 00:00:00 GMT 007 - React Hooks: useState and useEffect

React Hooks are functions that let you "hook into" React state and lifecycle features from functional components. They allow you to use state and other React features without writing a class. The two most fundamental hooks are useState and useEffect.

useState Hook

useState is a Hook that lets you add React state to functional components. It returns a pair of values: the current state, and a function that updates it.

Syntax

const [stateVariable, setStateVariable] = useState(initialValue);
  • stateVariable: The current value of the state.
  • setStateVariable: A function to update the stateVariable. When this function is called, React will re-render the component.
  • initialValue: The initial value for the state. This can be any JavaScript data type (number, string, boolean, object, array, etc.).

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
const BlogPostPage = () => {
  // ...
  const [post, setPost] = useState(null);
  const [loading, setLoading] = useState(true);
  const [readingProgress, setReadingProgress] = useState(0);
  const [isAtTop, setIsAtTop] = useState(true); // New state for tracking if at top
  const [isModalOpen, setIsModalToOpen] = useState(false);
  const [modalContent, setModalContent] = useState('');
  // ...
};

In BlogPostPage:

  • [post, setPost] = useState(null): post will hold the blog post data (attributes, body, series posts). It's initialized to null because the data is fetched asynchronously.
  • [loading, setLoading] = useState(true): loading is a boolean that indicates whether the post data is currently being fetched. It starts as true.
  • [readingProgress, setReadingProgress] = useState(0): readingProgress stores the user's scroll progress on the page, initialized to 0.
  • [isAtTop, setIsAtTop] = useState(true): Tracks if the user is at the top of the page.
  • [isModalOpen, setIsModalToOpen] = useState(false): Controls the visibility of a modal, initialized to false (closed).
  • [modalContent, setModalContent] = useState(''): Stores the content to be displayed inside the modal.

Example from src/components/Layout.js

// src/components/Layout.js
const Layout = ({ children }) => {
  const [isSidebarOpen, setIsSidebarOpen] = useState(window.innerWidth > 768);
  // ...
};

In Layout:

  • [isSidebarOpen, setIsSidebarOpen] = useState(window.innerWidth > 768): isSidebarOpen controls the visibility of the sidebar. Its initial value depends on the window width, making the sidebar open by default on larger screens.
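Conceptually, useState can be pictured as a value held in a closure that survives between calls, with a setter that replaces it. This is only a mental model, not React's actual implementation; a real setter also schedules a re-render of the component:

```javascript
// Hypothetical mental model of useState (NOT React's real implementation).
// The value lives in a closure; the setter accepts either a plain value or
// an updater function, mirroring setCount(c => c + 1) in React.
function createState(initialValue) {
  let value = initialValue;
  const get = () => value;
  const set = (next) => {
    value = typeof next === 'function' ? next(value) : next;
    // React would also schedule a re-render here.
  };
  return [get, set];
}
```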

useEffect Hook

useEffect is a Hook that lets you perform side effects in functional components. Side effects include data fetching, subscriptions, manually changing the DOM, and other operations that interact with the outside world. It runs after every render of the component by default, but you can control when it runs using its dependency array.

Syntax

useEffect(() => {
  // Side effect code here
  return () => {
    // Cleanup function (optional)
  };
}, [dependency1, dependency2]); // Dependency array (optional)
  • First argument (function): This is where you put your side effect code. It can optionally return a cleanup function.
  • Second argument (dependency array): This array controls when the effect re-runs.
    • If omitted, the effect runs after every render.
    • If it is an empty array ([]), the effect runs only once after the initial render (like componentDidMount), and the cleanup runs on unmount (like componentWillUnmount).
    • If it contains variables (e.g., [prop1, state1]), the effect runs after the initial render and whenever any of the variables in the array change.
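The dependency check itself is a shallow comparison: React compares each entry of the new array against the previous render's array with Object.is. The helper below is an illustrative sketch of that rule, not React's source:

```javascript
// Hypothetical sketch of the dependency-array comparison.
// Because each entry is compared with Object.is, a fresh object or array
// literal counts as "changed" on every render even if its contents match.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: the effect always runs
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}
```

This is why passing an inline object like [{ id }] as a dependency re-runs the effect on every render, while primitives such as [currentSlug] only trigger it when the value actually changes.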

Example from src/pages/BlogPostPage.js (Data Fetching)

// src/pages/BlogPostPage.js
useEffect(() => {
  const fetchPost = async () => {
    setLoading(true);
    // ... data fetching logic using fetch API ...
    setLoading(false);
  };

  fetchPost();
}, [currentSlug]); // Effect re-runs when currentSlug changes

This useEffect hook is responsible for fetching the blog post data.

  • It defines an async function fetchPost to handle the asynchronous data retrieval.
  • setLoading(true) is called at the start to show a loading indicator.
  • The fetch API is used to get the .txt content and shownPosts.json metadata.
  • Crucially, the dependency array [currentSlug] ensures that this effect runs only when the currentSlug (derived from the URL parameters) changes. This prevents unnecessary re-fetches and ensures the correct post is loaded when navigating between posts.

Example from src/pages/BlogPostPage.js (Scroll Event Listener)

// src/pages/BlogPostPage.js
useEffect(() => {
  const handleScroll = () => {
    if (contentRef.current) {
      const { scrollTop, scrollHeight, clientHeight } =
        document.documentElement;
      const totalHeight = scrollHeight - clientHeight;
      const currentProgress = (scrollTop / totalHeight) * 100;
      setReadingProgress(currentProgress);
      setIsAtTop(scrollTop === 0); // Update isAtTop based on scroll position
    }
  };

  window.addEventListener('scroll', handleScroll);
  return () => window.removeEventListener('scroll', handleScroll);
}, [post]); // Re-attach scroll listener if post changes

This useEffect manages a scroll event listener to calculate reading progress and determine if the user is at the top of the page.

  • It adds an event listener to the window when the component mounts or when the post state changes.
  • The return () => { ... } part is a cleanup function. This function runs when the component unmounts or before the effect re-runs due to a dependency change. It's essential here to remove the event listener to prevent memory leaks and unexpected behavior.
  • The dependency array [post] means the effect (and its cleanup) will re-run if the post object changes, ensuring the scroll listener is correctly attached to the relevant content.
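
The cleanup-before-re-run ordering can also be illustrated without React. The runner below is a toy, not React's implementation; it only mimics the documented order: the previous effect's cleanup fires before the next effect runs, and once more on unmount.

```javascript
// Toy effect runner: cleanup of the previous effect runs before the
// next effect, and the final cleanup runs on unmount.
const log = [];
let cleanup = null;

function applyEffect(effect) {
  if (cleanup) cleanup();     // old cleanup first (e.g. removeEventListener)
  cleanup = effect() || null; // effect may return a new cleanup
}

function unmount() {
  if (cleanup) cleanup();
  cleanup = null;
}

applyEffect(() => { log.push('attach A'); return () => log.push('detach A'); });
applyEffect(() => { log.push('attach B'); return () => log.push('detach B'); });
unmount();
// log: ['attach A', 'detach A', 'attach B', 'detach B']
```

In the scroll example above, "attach" corresponds to addEventListener and "detach" to removeEventListener, which is exactly why stale listeners never accumulate.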

Example from src/components/Layout.js (Window Resize Listener)

// src/components/Layout.js
useEffect(() => {
  const handleResize = () => {
    if (window.innerWidth <= 768) {
      setIsSidebarOpen(false);
    }
  };

  window.addEventListener('resize', handleResize);

  return () => {
    window.removeEventListener('resize', handleResize);
  };
}, []); // Empty dependency array: runs once on mount, cleans up on unmount

This useEffect in Layout.js keeps the sidebar responsive by automatically closing it on narrow viewports.

  • It adds a resize event listener to the window.
  • The handleResize function closes the sidebar if the window width drops below 768 pixels.
  • The empty dependency array [] ensures that this effect runs only once after the initial render and its cleanup function runs only when the component unmounts. This is perfect for setting up global event listeners that don't need to be re-initialized unless the component is completely removed from the DOM.

Summary

useState and useEffect are powerful tools that bring state management and side effect handling to functional components, making them as capable as class components while often being more concise and easier to reason about. Understanding their usage, especially the role of the dependency array in useEffect, is fundamental to building robust React applications.

Read more...

]]>
<![CDATA[React Context Usecontext]]> https://fezcode.com/blog/react-context-usecontext https://fezcode.com/blog/react-context-usecontext Sat, 25 Oct 2025 00:00:00 GMT 008 - React Context API and useContext

The React Context API provides a way to pass data through the component tree without having to pass props down manually at every level. This is particularly useful for global data (like user authentication, theme, or in this case, toast notifications) that many components might need access to.

The Problem Context Solves (Prop Drilling)

Imagine you have a deeply nested component tree, and a piece of data (e.g., a user object) is needed by a component several levels down. Without Context, you'd have to pass that data as a prop through every intermediate component, even if those components don't directly use the data. This is known as "prop drilling" and can make your code verbose and harder to maintain.

How Context API Works

The Context API consists of three main parts:

  1. createContext: Creates a Context object. When React renders a component that subscribes to this Context object, it will read the current context value from the closest matching Provider above it in the tree.
  2. Provider: A React component that allows consuming components to subscribe to context changes. It accepts a value prop to be passed to consuming components that are descendants of this Provider.
  3. useContext: A React Hook that lets you read context from a functional component.

Example: Toast Notification System

This project uses the Context API to manage and display toast notifications globally. Let's examine src/components/ToastContext.js and src/hooks/useToast.js.

src/components/ToastContext.js (The Provider)

import React, { createContext, useState, useCallback } from 'react';
import Toast from './Toast';

export const ToastContext = createContext();

let id = 0; // Simple counter for unique toast IDs

export const ToastProvider = ({ children }) => {
  const [toasts, setToasts] = useState([]); // State to hold active toasts

  const addToast = useCallback((toast) => {
    const newToast = { ...toast, id: id++ };
    setToasts((prevToasts) => {
      if (prevToasts.length >= 5) { // Limit to 5 toasts
        const updatedToasts = prevToasts.slice(0, prevToasts.length - 1);
        return [newToast, ...updatedToasts];
      }
      return [newToast, ...prevToasts];
    });
  }, []); // Memoize addToast function

  const removeToast = useCallback((id) => {
    setToasts((prevToasts) => prevToasts.filter((toast) => toast.id !== id));
  }, []); // Memoize removeToast function

  return (
    <ToastContext.Provider value={{ addToast, removeToast }}>
      {children}
      <div className="fixed top-28 right-10 z-50">
        {toasts.map((toast) => (
          <Toast
            key={toast.id}
            id={toast.id}
            title={toast.title}
            message={toast.message}
            duration={toast.duration}
            removeToast={removeToast}
          />
        ))}
      </div>
    </ToastContext.Provider>
  );
};

Explanation:

  1. export const ToastContext = createContext();: A Context object named ToastContext is created. This object will be used by both the Provider and the Consumer.
  2. ToastProvider Component: This is a functional component that will wrap parts of your application (as seen in App.js). Note that the provider component needs a name distinct from the ToastContext object created above; declaring both as ToastContext in the same module would be a duplicate-identifier error.
    • const [toasts, setToasts] = useState([]);: Manages the array of active toast notifications using useState.
    • addToast and removeToast functions: These functions are responsible for adding new toasts to the toasts array and removing them. They are wrapped in useCallback to prevent unnecessary re-creations, which is an optimization for performance.
    • <ToastContext.Provider value={{ addToast, removeToast }}>: This is the core of the Provider. It makes the addToast and removeToast functions available to any component that consumes ToastContext and is rendered within this Provider's tree. The value prop is crucial here.
    • {children}: This renders whatever components are passed as children to the ToastProvider. These children (and their descendants) will have access to the context value.
    • Toast Rendering: The ToastProvider also directly renders the actual Toast components based on the toasts state, positioning them in the top-right corner of the screen.

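Because the addToast update is a pure function of the previous array, its capping behavior can be exercised outside React entirely. A plain-JS restating of that logic (same cap of 5, newest first):

```javascript
// Plain-JS version of the addToast update logic: the newest toast goes
// to the front, and the oldest (last) entry is dropped once 5 are shown.
let nextId = 0;

function addToast(prevToasts, toast) {
  const newToast = { ...toast, id: nextId++ };
  if (prevToasts.length >= 5) {
    return [newToast, ...prevToasts.slice(0, prevToasts.length - 1)];
  }
  return [newToast, ...prevToasts];
}

let toasts = [];
for (let i = 0; i < 7; i++) {
  toasts = addToast(toasts, { title: `Toast ${i}` });
}
// Only 5 remain, newest first: Toast 6 down to Toast 2.
```

Returning a new array (rather than mutating prevToasts) is what lets React detect the state change and re-render.
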
src/hooks/useToast.js (The Consumer Hook)

import { useContext } from 'react';
import { ToastContext } from '../components/ToastContext';

export const useToast = () => {
  return useContext(ToastContext);
};

Explanation:

  1. import { useContext } from 'react';: Imports the useContext Hook from React.
  2. import { ToastContext } from '../components/ToastContext';: Imports the ToastContext object that was created in ToastContext.js.
  3. export const useToast = () => { ... };: This is a custom hook. Custom hooks are a powerful feature in React that allow you to extract reusable stateful logic from components. This useToast hook simplifies consuming the ToastContext.
  4. return useContext(ToastContext);: This line is where the magic happens. When useContext(ToastContext) is called, React looks up the component tree for the closest ToastContext.Provider and returns its value prop. In this case, it returns { addToast, removeToast }.

How it's Used in a Component (e.g., BlogPostPage.js)

// Inside BlogPostPage.js (or any other component that needs toasts)
import { useToast } from '../hooks/useToast';

const CodeBlock = ({ /* ... */ }) => {
  const { addToast } = useToast(); // Access addToast function

  const handleCopy = () => {
    // ... copy logic ...
    addToast({
      title: 'Success',
      message: 'Copied to clipboard!',
      duration: 3000,
    });
    // ...
  };
  // ...
};

Any component that needs to display a toast simply imports and calls useToast(), and it immediately gets access to the addToast function without needing to receive it as a prop from its parent.

Summary

The React Context API, combined with the useContext hook, provides an elegant solution for managing global state and sharing functions across your component tree, avoiding prop drilling and making your application's architecture cleaner and more maintainable. The toast notification system in this project is a prime example of its effective use.

Read more...

]]>
<![CDATA[Routing With React Router Dom]]> https://fezcode.com/blog/routing-with-react-router-dom https://fezcode.com/blog/routing-with-react-router-dom Sat, 25 Oct 2025 00:00:00 GMT 009 - Routing with react-router-dom

react-router-dom is the standard library for client-side routing in React applications. It allows you to define different URLs for different views of your application, enabling navigation without full page reloads. This project uses react-router-dom to manage its various pages like blog posts, projects, and an about page.

Core Concepts

1. HashRouter

As seen in src/App.js:

// src/App.js
import { HashRouter as Router } from 'react-router-dom';
// ...
function App() {
  return (
    <Router>
      {/* ... all other components are wrapped here */}
    </Router>
  );
}
  • Purpose: HashRouter uses the hash portion of the URL (e.g., http://localhost:3000/#/blog) to keep your UI in sync with the URL. This is particularly useful for static site hosting (like GitHub Pages) because it doesn't require any special server-side configuration to handle routing. The server always serves index.html, and the React application handles the routing based on the hash.
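
Since the fragment after # is never sent to the server, a static host only ever sees a request for /, and everything after the hash is parsed client-side. A small sketch of that extraction using the standard URL API:

```javascript
// The fragment after '#' never leaves the browser, so a static server
// only sees '/'. The client-side router reads the hash itself.
function clientPath(href) {
  const { hash } = new URL(href); // e.g. '#/blog/my-post'
  return hash ? hash.slice(1) : '/'; // strip the leading '#'
}

clientPath('http://localhost:3000/#/blog/my-post'); // → '/blog/my-post'
clientPath('http://localhost:3000/');               // → '/'
```

HashRouter performs this parsing (and listens for hash changes) so your Route definitions can use ordinary-looking paths like /blog/:slug.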

2. Routes and Route

These components are used to define the mapping between URL paths and the React components that should be rendered for those paths. They are typically found in a central routing component, like AnimatedRoutes.js in this project.

Example from src/components/AnimatedRoutes.js

// src/components/AnimatedRoutes.js
import React from 'react';
import { Routes, Route, useLocation } from 'react-router-dom';
import { AnimatePresence, motion } from 'framer-motion';
// ... page component imports

function AnimatedRoutes() {
  const location = useLocation();

  return (
    <AnimatePresence mode="wait">
      <Routes location={location} key={location.pathname}>
        <Route
          path="/"
          element={
            <motion.div /* ... */ >
              <HomePage />
            </motion.div>
          }
        />
        <Route
          path="/blog/:slug"
          element={
            <motion.div /* ... */ >
              <BlogPostPage />
            </motion.div>
          }
        />
        <Route
          path="*"
          element={
            <motion.div /* ... */ >
              <NotFoundPage />
            </motion.div>
          }
        />
        {/* ... other routes */}
      </Routes>
    </AnimatePresence>
  );
}

export default AnimatedRoutes;
  • Routes: This component is a container for all your Route components. It looks at the current URL and renders the first Route that matches.
    • location={location} and key={location.pathname}: These props are used in conjunction with framer-motion's AnimatePresence to enable exit animations when navigating between routes. By providing a key that changes with the path, AnimatePresence can detect when a component is being removed from the tree.
  • Route: Defines a single route.
    • path: Specifies the URL path pattern. Examples:
      • "/": Matches the root URL.
      • "/blog": Matches /blog.
      • "/blog/:slug": Matches /blog/any-value. The :slug part is a URL parameter, meaning any-value will be captured and made available to the component.
      • "/blog/series/:seriesSlug/:episodeSlug": Matches more complex paths with multiple parameters.
      • "*": A wildcard route that matches any path not matched by previous routes. This is typically used for a 404 (Not Found) page.
    • element: The React element (component) to render when the path matches. In this project, each page component is wrapped in a framer-motion motion.div to apply page transition animations.
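
The :slug capture can be sketched as a tiny segment-by-segment matcher. This is not react-router's actual algorithm (which also scores and ranks competing routes), just the core idea behind path and useParams:

```javascript
// Minimal sketch of ':param' matching -- literal segments must match
// exactly, ':name' segments capture whatever appears in that position.
function matchPath(pattern, pathname) {
  const patternParts = pattern.split('/');
  const pathParts = pathname.split('/');
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

matchPath('/blog/:slug', '/blog/my-first-post');
// → { slug: 'my-first-post' }  (what useParams() would return)
matchPath('/blog/series/:seriesSlug/:episodeSlug', '/blog/series/react/008');
// → { seriesSlug: 'react', episodeSlug: '008' }
```

A non-matching path returns null, which is why the "*" wildcard route is needed as a catch-all for everything the earlier routes reject.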

3. useLocation Hook

// src/components/AnimatedRoutes.js
import { Routes, Route, useLocation } from 'react-router-dom';
// ...
function AnimatedRoutes() {
  const location = useLocation();
  // ...
}
  • Purpose: useLocation is a hook that returns the current location object. This object contains information about the current URL, such as pathname, search (query parameters), and hash. In AnimatedRoutes.js, it's used to provide a key to Routes for animation purposes.

4. useParams Hook

As seen in src/pages/BlogPostPage.js:

// src/pages/BlogPostPage.js
import { useParams, Link, useNavigate } from 'react-router-dom';
// ...
const BlogPostPage = () => {
  const { slug, seriesSlug, episodeSlug } = useParams();
  const currentSlug = episodeSlug || slug; // Use episodeSlug if present, otherwise use slug
  // ...
};
  • Purpose: useParams is a hook that returns an object of key/value pairs of URL parameters. For a route like path="/blog/:slug", if the URL is /blog/my-first-post, useParams() would return { slug: 'my-first-post' }.
  • Example: In BlogPostPage, it extracts slug, seriesSlug, and episodeSlug from the URL, allowing the component to fetch the correct blog post content.

5. useNavigate Hook

As seen in src/pages/BlogPostPage.js:

// src/pages/BlogPostPage.js
import { useParams, Link, useNavigate } from 'react-router-dom';
// ...
const BlogPostPage = () => {
  // ...
  const navigate = useNavigate();
  // ...
  if (postBody.trim().startsWith('<!DOCTYPE html>')) {
    console.error('Fetched content is HTML, not expected post content for:', currentSlug);
    navigate('/404'); // Redirect to 404 page
    return; // Stop further processing
  }
  // ...
};
  • Purpose: useNavigate is a hook that returns a function that lets you navigate programmatically. This is useful for actions like redirecting after a form submission, or in this case, redirecting to a 404 page when content is not found.
  • Example: In BlogPostPage, if the fetched content is determined to be an index.html fallback (indicating the actual post file was not found), navigate('/404') is called to redirect the user to the NotFoundPage.

6. Link Component

As seen in src/pages/BlogPostPage.js:

// src/pages/BlogPostPage.js
// ...
<Link
  to={backLink}
  className="text-primary-400 hover:underline flex items-center justify-center gap-2 text-lg mb-4"
>
  <ArrowLeft size={24} /> {backLinkText}
</Link>
// ...
  • Purpose: The Link component is used to create navigation links within your application. It prevents a full page reload when clicked, allowing react-router-dom to handle the navigation client-side.
  • to prop: Specifies the destination path. It can be a string or an object.

Summary

react-router-dom provides a powerful and flexible way to manage navigation in React applications. By using HashRouter, Routes, Route, useParams, useNavigate, and Link, the Fezcode project creates a seamless single-page application experience with distinct URLs for different content, including dynamic routing for blog posts and projects, and robust handling for non-existent pages.

Read more...

]]>
<![CDATA[Css And Tailwind Css]]> https://fezcode.com/blog/css-and-tailwind-css https://fezcode.com/blog/css-and-tailwind-css Sat, 25 Oct 2025 00:00:00 GMT 010 - CSS and Tailwind CSS

This project leverages a combination of traditional CSS and the utility-first framework Tailwind CSS for styling. This approach allows for both rapid development using pre-defined utility classes and fine-grained control with custom CSS when necessary.

src/index.css - Global Styles and Tailwind Directives

src/index.css serves as the main entry point for all CSS in the application. It's where Tailwind CSS is integrated and where global base styles and overrides are defined.

@tailwind base;
@tailwind components;
@tailwind utilities;

html, body {
  height: 100%;
}

body {
  margin: 0;
  background-color: #020617;
  font-family: 'Space Mono', 'JetBrains Mono', monospace, sans-serif !important;
  font-weight: 400 !important;
  font-style: normal !important;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

code {
  font-family:
    source-code-pro, Menlo, Monaco, Consolas, 'Courier New', monospace;
}

/* ... other custom styles and overrides ... */

:root {
  --color-dev-badge: #44403c; /* stone-700 */
  --color-takes-badge: #065f46; /* emerald-800 */
  --color-series-badge: #e11d48; /* rose-600 */
  --color-dnd-badge: #583fa3; /* violet-400 */
}

Explanation:

  • @tailwind base;: This directive injects Tailwind's base styles, which are a set of opinionated defaults that normalize browser styles and provide a solid foundation for building on.
  • @tailwind components;: This injects Tailwind's component classes. These are typically larger, more complex classes that you might extract from repeated utility patterns (though this project might not use many custom components).
  • @tailwind utilities;: This injects all of Tailwind's utility classes (e.g., flex, pt-4, text-lg, bg-gray-950). These are the core of Tailwind's utility-first approach.
  • Global CSS Resets/Defaults: After the @tailwind directives, you see standard CSS rules that apply globally:
    • html, body { height: 100%; }: Ensures the html and body elements take up the full viewport height.
    • body { ... }: Removes the default margin and sets the background-color, font-family, font-weight, font-style, and font-smoothing properties for the entire application.
    • code { ... }: Defines a specific font stack for <code> elements.
  • Custom Styles and Overrides: The file also contains custom CSS rules, such as those for .prose (likely related to the @tailwindcss/typography plugin) and specific styling for images and inline code blocks within prose content. These demonstrate how to override or extend Tailwind's defaults with custom CSS when needed.
  • CSS Variables: The :root block defines custom CSS variables (e.g., --color-dev-badge). These can be used throughout the CSS and even in JavaScript to maintain consistent theming.

tailwind.config.js - Customizing Tailwind CSS

tailwind.config.js is the configuration file for Tailwind CSS. It allows you to customize Tailwind's default theme, add new utility classes, and integrate plugins.

const defaultTheme = require('tailwindcss/defaultTheme')
const colors = require('./src/config/colors');
const fonts = require('./src/config/fonts'); // New import

/** @type {import('tailwindcss').Config} */
module.exports = {
  darkMode: 'class',
  content: [
    "./src/**/*.{js,jsx,ts,tsx}",
  ],
  theme: {
    extend: {
      fontFamily: {
        sans: ['Space Mono', ...defaultTheme.fontFamily.sans],
        mono: ['JetBrains Mono', ...defaultTheme.fontFamily.mono],
        arvo: fonts.arvo, // New custom font
        playfairDisplay: fonts.playfairDisplay, // New custom font
        inter: fonts.inter, // New custom font
      },
      colors: colors,
      typography: (theme) => ({
        dark: {
          css: {
            color: theme('colors.gray.300'),
            a: {
              color: theme('colors.primary.400'),
              '&:hover': {
                color: theme('colors.primary.600'),
              },
            },
            // ... other typography customizations
          },
        },
      }),
    },
  },
  plugins: [
    require('@tailwindcss/typography'),
  ],
}

Explanation:

  • darkMode: 'class': Configures Tailwind to use class-based dark mode. This means you can toggle dark mode by adding or removing the dark class (e.g., <html class="dark">) to an ancestor element.
  • content: This array specifies the files that Tailwind should scan for utility classes. This is crucial for Tailwind's JIT (Just-In-Time) mode, which only generates the CSS you actually use, resulting in smaller bundle sizes.
    • "./src/**/*.{js,jsx,ts,tsx}": Tells Tailwind to look for classes in all .js, .jsx, .ts, and .tsx files within the src directory.
  • theme: This is where you customize Tailwind's default design system.
    • extend: Allows you to add to Tailwind's default theme without overwriting it entirely.
      • fontFamily: Customizes font stacks. Here, Space Mono and JetBrains Mono are added, and custom fonts like arvo, playfairDisplay, and inter are integrated, likely defined in src/config/fonts.js.
      • colors: Customizes the color palette. It imports colors from src/config/colors.js, allowing for a centralized color definition.
      • typography: This section customizes the @tailwindcss/typography plugin. It defines specific styles for elements within prose content (like Markdown rendered text) for a dark theme, ensuring readability and consistent styling for headings, links, code blocks, etc.
  • plugins: This array is where you register Tailwind plugins.
    • require('@tailwindcss/typography'): Integrates the official Typography plugin, which provides a set of prose classes to style raw HTML or Markdown content with beautiful, readable typography defaults.

How it Works Together

  1. Development: When you run npm start, Tailwind's JIT engine scans your content files, generates only the necessary CSS utility classes based on your usage and tailwind.config.js customizations, and injects them into your application via src/index.css.
  2. Production Build: When you run npm run build, Tailwind purges any unused CSS, resulting in a highly optimized and small CSS bundle.
  3. Usage in Components: In your React components, you apply styles by adding Tailwind utility classes directly to your JSX elements (e.g., <div className="bg-gray-950 text-white p-4">).

This combination provides a powerful and efficient way to style modern web applications, offering both flexibility and maintainability.

Read more...

]]>
<![CDATA[Javascript Fundamentals]]> https://fezcode.com/blog/javascript-fundamentals https://fezcode.com/blog/javascript-fundamentals Sat, 25 Oct 2025 00:00:00 GMT 011 - JavaScript Fundamentals in the Project

This project heavily utilizes modern JavaScript features to build a dynamic and interactive user interface. Understanding these fundamental concepts is crucial for comprehending the codebase. This document will highlight several key JavaScript concepts with examples drawn from the project.

1. async/await for Asynchronous Operations

Asynchronous operations (like fetching data from a server) are common in web applications. async/await provides a cleaner, more readable way to handle Promises.

  • async function: A function declared with async always returns a Promise. It allows you to use the await keyword inside it.
  • await keyword: Can only be used inside an async function. It pauses the execution of the async function until the Promise it's waiting for settles (either resolves or rejects), and then resumes the async function's execution with the resolved value.

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
useEffect(() => {
  const fetchPost = async () => { // async function
    setLoading(true);
    try {
      const [postContentResponse, shownPostsResponse] = await Promise.all([ // await Promise.all
        fetch(`/posts/${currentSlug}.txt`),
        fetch('/posts/shownPosts.json'),
      ]);

      let postBody = '';
      if (postContentResponse.ok) {
        postBody = await postContentResponse.text(); // await fetch response
        // ...
      }
      // ...
    } catch (error) {
      console.error('Error fetching post or shownPosts.json:', error);
      // ...
    } finally {
      setLoading(false);
    }
  };

  fetchPost();
}, [currentSlug]);
  • The fetchPost function is declared async because it performs asynchronous network requests.
  • await Promise.all([...]) is used to wait for multiple fetch calls (which return Promises) to complete concurrently. This is more efficient than awaiting them one after another if they don't depend on each other.
  • await postContentResponse.text() waits for the response body to be fully read as text.
  • The try...catch...finally block is used for error handling and ensuring setLoading(false) is always called.

2. Promise.all for Concurrent Promises

Promise.all is a Promise combinator that takes an iterable of Promises as input and returns a single Promise. This returned Promise fulfills when all of the input's Promises have fulfilled, or rejects as soon as any of the input's Promises rejects.

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
const [postContentResponse, shownPostsResponse] = await Promise.all([
  fetch(`/posts/${currentSlug}.txt`),
  fetch('/posts/shownPosts.json'),
]);
  • Here, Promise.all is used to initiate two network requests (fetch for the post content and fetch for the metadata JSON) at the same time. The await keyword then waits for both of them to complete. The results are destructured into postContentResponse and shownPostsResponse.
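
Note that Promise.all does not start the work itself; each promise in the array is already in flight by the time Promise.all is called. A minimal timing sketch with stand-in promises:

```javascript
// Both "requests" below start immediately; Promise.all only waits for
// both and preserves the input order in its result array.
const delayed = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function loadBoth() {
  const started = Date.now();
  const [post, meta] = await Promise.all([
    delayed('post content', 50),
    delayed('metadata', 30),
  ]);
  // Total wait is roughly 50ms (the slowest), not 80ms (the sum).
  return { post, meta, elapsed: Date.now() - started };
}
```

If either input rejects, Promise.all rejects immediately with that reason (fail-fast), which is why the project wraps the call in try...catch.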

3. Array Methods (filter, find, sort)

Modern JavaScript provides powerful array methods that make working with collections of data much easier and more declarative.

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
// ... inside fetchPost function
if (shownPostsResponse.ok) {
  const allPosts = await shownPostsResponse.json();
  postMetadata = allPosts.find((item) => item.slug === currentSlug); // find

  if (postMetadata && postMetadata.series) {
    seriesPosts = allPosts
      .filter((item) => item.series === postMetadata.series) // filter
      .sort((a, b) => a.seriesIndex - b.seriesIndex); // sort
  }
}
  • Array.prototype.find(): Returns the value of the first element in the provided array that satisfies the provided testing function. Otherwise, undefined is returned.
    • allPosts.find((item) => item.slug === currentSlug): Finds the first post object in allPosts whose slug property matches currentSlug.
  • Array.prototype.filter(): Creates a new array with all elements that pass the test implemented by the provided function.
    • allPosts.filter((item) => item.series === postMetadata.series): Creates a new array containing only posts that belong to the same series as the current post.
  • Array.prototype.sort(): Sorts the elements of an array in place and returns the sorted array. The default sort order is ascending, built upon converting the elements into strings, then comparing their sequences of UTF-16 code units.
    • .sort((a, b) => a.seriesIndex - b.seriesIndex): Sorts the seriesPosts array numerically based on their seriesIndex property in ascending order.
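
The same find/filter/sort pipeline can be run on a few lines of hypothetical metadata to see each step's output:

```javascript
// Hypothetical post metadata, shaped like shownPosts.json entries.
const allPosts = [
  { slug: 'intro', series: 'react', seriesIndex: 2 },
  { slug: 'hooks', series: 'react', seriesIndex: 1 },
  { slug: 'vim-tips', series: null, seriesIndex: 0 },
];

const currentSlug = 'hooks';
const postMetadata = allPosts.find((item) => item.slug === currentSlug);

const seriesPosts = allPosts
  .filter((item) => item.series === postMetadata.series) // same series only
  .sort((a, b) => a.seriesIndex - b.seriesIndex);        // numeric ascending

seriesPosts.map((p) => p.slug); // → ['hooks', 'intro']
```

filter returns a new array, so the in-place sort never reorders allPosts itself, only the filtered copy.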

4. Object Destructuring

Object destructuring is a JavaScript expression that makes it possible to unpack values from arrays, or properties from objects, into distinct variables.

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
const { slug, seriesSlug, episodeSlug } = useParams();
// ...
  • Here, useParams() returns an object containing URL parameters. Object destructuring is used to extract the slug, seriesSlug, and episodeSlug properties directly into variables with the same names.

Example from src/components/Layout.js

// src/components/Layout.js
const Layout = ({ children }) => {
  // ...
};
  • In this functional component definition, ({ children }) is using object destructuring to directly extract the children prop from the props object that React passes to the component.
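
A few more destructuring shapes appear throughout React codebases; the sketch below shows renaming, defaults, and array destructuring with made-up sample objects:

```javascript
// Made-up sample data for illustration.
const params = { slug: 'my-first-post', seriesSlug: 'react' };

const { slug, seriesSlug, episodeSlug } = params; // missing key → undefined
// slug === 'my-first-post', episodeSlug === undefined

const { slug: postSlug } = params;           // rename while unpacking
const { duration = 3000 } = { title: 'Hi' }; // default when key is absent

const [first, ...rest] = ['a', 'b', 'c'];    // arrays destructure too
// first === 'a', rest is ['b', 'c']
```

The "missing key → undefined" case is exactly why BlogPostPage can destructure all three of slug, seriesSlug, and episodeSlug even though any given route only provides some of them.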

5. Ternary Operator

The ternary operator (condition ? exprIfTrue : exprIfFalse) is a shorthand for an if...else statement, often used for conditional rendering or assigning values.

Example from src/pages/BlogPostPage.js

// src/pages/BlogPostPage.js
const currentSlug = episodeSlug || slug; // Use episodeSlug if present, otherwise use slug
// ...
const backLink = seriesSlug ? `/blog/series/${seriesSlug}` : '/blog';
const backLinkText = seriesSlug ? 'Back to Series' : 'Back to Blog';
  • episodeSlug || slug: This uses the logical OR operator (||) to assign episodeSlug if it's truthy, otherwise it assigns slug. This is a common pattern for providing fallback values.
  • seriesSlug ? `/blog/series/${seriesSlug}` : '/blog': If seriesSlug is truthy, backLink is set to the series URL; otherwise, it defaults to the general blog URL.
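
One caveat worth knowing: || falls back on any falsy value ('', 0, false), while the nullish coalescing operator ?? falls back only on null or undefined:

```javascript
// '||' falls back on ANY falsy value; '??' only on null/undefined.
const episodeSlug = undefined;
const slug = 'my-first-post';

const currentSlug = episodeSlug || slug; // 'my-first-post'

// Caveat: '' and 0 are falsy, so '||' discards them:
const title = '' || 'Untitled'; // 'Untitled' -- the empty string is lost
const count = 0 ?? 10;          // 0 -- '??' keeps it

const seriesSlug = 'react';
const backLink = seriesSlug ? `/blog/series/${seriesSlug}` : '/blog';
// backLink === '/blog/series/react'
```

For slugs from useParams, || is fine because a slug is never legitimately '' or 0, but ?? is the safer default when those values are meaningful.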

Summary

These JavaScript fundamentals, including asynchronous programming with async/await and Promise.all, efficient data manipulation with array methods, concise variable assignment with object destructuring, and conditional logic with the ternary operator, are extensively used throughout the Fezcode project. Mastering these concepts is key to understanding and contributing to modern React applications.

Read more...

]]>
<![CDATA[Html Structure]]> https://fezcode.com/blog/html-structure https://fezcode.com/blog/html-structure Sat, 25 Oct 2025 00:00:00 GMT 012 - HTML Structure (public/index.html)

public/index.html is the single HTML page that serves as the entry point for your React application. When a user visits your website, this is the file their browser first loads. The React application then takes over to dynamically render content into this HTML structure.

<!DOCTYPE html>
<html lang="en" class="dark">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <link rel="icon" type="image/svg+xml" href="%PUBLIC_URL%/favicon.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    <meta
      name="description"
      content="codex by fezcode..."
    />
    <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
    <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;700&family=Space+Mono:wght@400;700&display=swap" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css2?family=Arvo&family=Inter&family=Playfair+Display&display=swap" rel="stylesheet">
    <title>fezcodex</title>
  </head>
  <body class="bg-slate-950">
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
  </body>
</html>

Explanation of Key Sections

<!DOCTYPE html>

  • This declaration defines the document type to be HTML5.

<html lang="en" class="dark">

  • The root element of an HTML page.
  • lang="en": Specifies the primary language of the document content as English, which is important for accessibility and search engines.
  • class="dark": This class is likely used in conjunction with Tailwind CSS's dark mode configuration (darkMode: 'class' in tailwind.config.js). When this class is present on the <html> element, Tailwind will apply dark mode styles.

<head> Section

The <head> section contains metadata about the HTML document, which is not displayed on the web page itself but is crucial for browsers, search engines, and other web services.

  • <meta charset="utf-8" />: Specifies the character encoding for the document, ensuring proper display of various characters.
  • <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />: Links to the favicon, the small icon displayed in the browser tab or bookmark list. %PUBLIC_URL% is a placeholder that will be replaced with the public URL of your app during the build process.
  • <meta name="viewport" content="width=device-width, initial-scale=1" />: Configures the viewport for responsive design. It sets the width of the viewport to the device width and the initial zoom level to 1, ensuring the page scales correctly on different devices.
  • <meta name="theme-color" content="#000000" />: Suggests a color that browsers should use to tint the UI elements (like the address bar in mobile browsers) of the page.
  • <meta name="description" content="codex by fezcode..." />: Provides a brief, high-level description of the web page content. This is often used by search engines in search results.
  • <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />: Specifies an icon for web clips on iOS devices.
  • <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />: Links to a web app manifest file, which provides information about the web application (like name, icons, start URL) in a JSON text file. This is essential for Progressive Web Apps (PWAs).
  • <link rel="preconnect" ...> and <link href="https://fonts.googleapis.com/css2?..." rel="stylesheet">: These lines are used to preconnect to Google Fonts and import custom fonts (JetBrains Mono, Space Mono, Arvo, Inter, Playfair Display). preconnect helps establish early connections to improve font loading performance.
  • <title>fezcodex</title>: Sets the title of the HTML document, which appears in the browser tab or window title bar.

<body> Section

The <body> section contains all the content that is visible to the user.

  • <body class="bg-slate-950">: The main content area of the page. The bg-slate-950 class is a Tailwind CSS utility class that sets the background color of the body to a very dark slate color, consistent with the project's dark theme.
  • <noscript>You need to enable JavaScript to run this app.</noscript>: This content is displayed only if the user's browser has JavaScript disabled. Since React is a JavaScript library, the application cannot function without JavaScript.
  • <div id="root"></div>: This is the most crucial part for a React application. It's an empty div element with the ID root. This is the DOM node where your React application (specifically, the App component rendered by src/index.js) will be mounted and take control. All of your React components will be rendered as children of this div.

How React Mounts

As explained in 003-index-js-entry-point.md:

// src/index.js
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
);
  1. The JavaScript code in src/index.js (which is eventually bundled and loaded by the browser) finds the <div id="root"> element.
  2. ReactDOM.createRoot() creates a React root, which is the entry point for React to manage the DOM inside that element.
  3. root.render(<App />) then tells React to render your main App component (and all its children) inside this root div. From this point on, React efficiently updates and manages the content within this div based on your component's state and props.

Summary

public/index.html provides the foundational HTML structure and metadata for the web page. It's a relatively simple file because the React application dynamically generates and manages most of the visible content within the designated <div id="root">. This separation allows for a highly dynamic and interactive user experience powered by React.

Read more...

]]>
<![CDATA[Document Fetching Api]]> https://fezcode.com/blog/document-fetching-api https://fezcode.com/blog/document-fetching-api Sat, 25 Oct 2025 00:00:00 GMT 013 - Document Fetching with the fetch API

In modern web applications, fetching data from a server is a fundamental operation. The fetch API provides a powerful and flexible interface for making network requests, replacing older methods like XMLHttpRequest. This project uses fetch to retrieve blog post content and metadata.

The fetch API Basics

The fetch() method starts the process of fetching a resource from the network, returning a Promise that fulfills once the response is available. A fetch() call takes one mandatory argument, the path to the resource you want to fetch.

Basic Usage

fetch(url)
  .then(response => response.json()) // or .text(), .blob(), etc.
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));
  • fetch(url): Initiates the request. Returns a Promise that resolves to a Response object.
  • response.json() / response.text(): The Response object has methods to extract the body content. json() parses the response as JSON, while text() parses it as plain text. Both return a Promise.
  • .then(): Handles the successful resolution of a Promise.
  • .catch(): Handles any errors that occur during the fetch operation or in the subsequent .then() blocks.
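The same flow is often written with async/await. Because fetch only rejects on network failures, a common pattern is a small wrapper that turns HTTP error statuses into thrown errors. A sketch — the fetchJson name and the injectable fetchImpl parameter are illustrative, not part of this project:

```javascript
// fetch resolves even for 404/500 responses; it only rejects on network
// failures. This wrapper also rejects on HTTP error statuses.
// fetchImpl is injectable here purely so the sketch is easy to test.
async function fetchJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}
```

With a wrapper like this, a single try...catch covers both network failures and HTTP error statuses.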

Example from src/pages/BlogPostPage.js

Let's look at how fetch is used in BlogPostPage.js to get both the blog post's text content and its metadata.

// src/pages/BlogPostPage.js - inside the useEffect's fetchPost function
// ...
try {
  const [postContentResponse, shownPostsResponse] = await Promise.all([
    fetch(`/posts/${currentSlug}.txt`),
    fetch('/posts/shownPosts.json'),
  ]);

  // Handling post content response
  let postBody = '';
  if (postContentResponse.ok) { // Check if the HTTP status code is in the 200-299 range
    postBody = await postContentResponse.text(); // Extract response body as text
    // Additional check for HTML fallback content
    if (postBody.trim().startsWith('<!DOCTYPE html>')) {
      console.error('Fetched content is HTML, not expected post content for:', currentSlug);
      navigate('/404');
      return;
    }
  } else {
    console.error('Failed to fetch post content for:', currentSlug);
    navigate('/404');
    return;
  }

  // Handling metadata response
  let postMetadata = null;
  if (shownPostsResponse.ok) { // Check if the HTTP status code is in the 200-299 range
    const allPosts = await shownPostsResponse.json(); // Extract response body as JSON
    postMetadata = allPosts.find((item) => item.slug === currentSlug);
    // ... further processing of series posts
  } else {
    console.error('Failed to fetch shownPosts.json');
  }

  // Final check and state update
  if (postMetadata && postContentResponse.ok) {
    setPost({ attributes: postMetadata, body: postBody, seriesPosts });
  } else {
    setPost({ attributes: { title: 'Post not found' }, body: '' });
  }
} catch (error) {
  console.error('Error fetching post or shownPosts.json:', error);
  setPost({ attributes: { title: 'Error loading post' }, body: '' });
} finally {
  setLoading(false);
}
// ...

Explanation of fetch Usage in BlogPostPage.js:

  1. Promise.all([...]): As discussed in 011-javascript-fundamentals.md, Promise.all is used to concurrently fetch two resources:

    • fetch(`/posts/${currentSlug}.txt`): Fetches the actual Markdown content of the blog post. The currentSlug is dynamically inserted into the URL via a template literal.
    • fetch('/posts/shownPosts.json'): Fetches a JSON file containing metadata for all blog posts.
  2. response.ok Property: After a fetch call, the Response object has an ok property. This is a boolean that indicates whether the HTTP response status is in the 200-299 range (inclusive). It's crucial to check response.ok because fetch does not throw an error for HTTP error statuses (like 404 or 500) by default; it only throws an error for network failures.

  3. response.text() and response.json(): These methods are used to parse the response body:

    • postContentResponse.text(): Used for the .txt file, as it contains plain text (Markdown).
    • shownPostsResponse.json(): Used for the .json file, as it contains structured JSON data.
  4. Error Handling (HTTP Status):

    • If postContentResponse.ok is false (meaning the .txt file was not found or returned an error status), an error is logged, and the application navigates to the /404 page using navigate('/404').
    • A specific check if (postBody.trim().startsWith('<!DOCTYPE html>')) was added to handle the scenario where the development server might return the index.html (with a 200 status) instead of a 404 for a non-existent file. This ensures that even in such cases, the user is redirected to the 404 page.
    • If shownPostsResponse.ok is false, an error is logged, but the application doesn't navigate to 404 directly, as the post content might still be available, just without rich metadata.
  5. try...catch Block: The entire asynchronous operation is wrapped in a try...catch block. This catches any network errors (e.g., server unreachable) or errors that occur during the processing of the Promises (e.g., json() parsing error). If an error occurs, it's logged, and the post state is set to indicate an error.

  6. finally Block: The setLoading(false) call is placed in a finally block. This ensures that the loading state is always turned off, regardless of whether the fetch operation succeeded or failed.
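One detail worth calling out: Promise.all is fail-fast. If either request rejects with a network error, the whole awaited expression throws, which is exactly why the try...catch wraps both fetches. A minimal illustration:

```javascript
// Promise.all rejects as soon as any input Promise rejects,
// so a single network failure sends control to the catch block.
async function demo() {
  try {
    await Promise.all([
      Promise.resolve('post body'),              // stands in for a successful fetch
      Promise.reject(new Error('network down')), // stands in for a failed fetch
    ]);
    return 'ok';
  } catch (error) {
    return `caught: ${error.message}`;
  }
}
```

An HTTP 404, by contrast, resolves normally, which is why the response.ok checks are still needed inside the try block.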

Summary

The fetch API is a modern, Promise-based way to make network requests in JavaScript. By understanding how to use fetch with async/await, handle Response objects (especially response.ok), and implement robust error handling with try...catch, developers can effectively retrieve and process data from various sources, as demonstrated in the Fezcode project's BlogPostPage.js component.

Read more...

]]>
<![CDATA[React Custom Hooks]]> https://fezcode.com/blog/react-custom-hooks https://fezcode.com/blog/react-custom-hooks Sat, 25 Oct 2025 00:00:00 GMT 014 - React: Custom Hooks

Custom Hooks are a powerful feature in React that allow you to extract reusable stateful logic from components. They are JavaScript functions whose names start with use and that can call other Hooks. Custom Hooks solve the problem of sharing logic between components without relying on prop drilling or complex patterns like render props or higher-order components.

Why Use Custom Hooks?

  1. Reusability: Extract common logic (state, effects, context) into a single function that can be used across multiple components.
  2. Readability: Components become cleaner and easier to understand as their logic is separated from their UI concerns.
  3. Maintainability: Changes to shared logic only need to be made in one place.
  4. Testability: Logic extracted into custom hooks can often be tested more easily in isolation.

How to Create a Custom Hook

A custom Hook is a JavaScript function that:

  • Starts with the word use (e.g., useFriendStatus, useToast). This naming convention is crucial for React to know that it's a Hook and to apply the rules of Hooks (e.g., only call Hooks at the top level of a React function).
  • Can call other Hooks (e.g., useState, useEffect, useContext).
  • Can return anything: stateful values, functions, or nothing.

Example: useToast Custom Hook (src/hooks/useToast.js)

This project provides an excellent example of a custom hook: useToast. It encapsulates the logic for accessing the toast notification system's addToast and removeToast functions.

src/hooks/useToast.js

import { useContext } from 'react';
import { ToastContext } from '../components/ToastContext';

export const useToast = () => {
  return useContext(ToastContext);
};

Explanation:

  1. import { useContext } from 'react';: The custom hook itself uses another built-in Hook, useContext, to access the value provided by the ToastContext.
  2. import { ToastContext } from '../components/ToastContext';: It imports the ToastContext object, which was created in ToastContext.js.
  3. export const useToast = () => { ... };: This defines the custom hook. Its name useToast clearly indicates its purpose and follows the naming convention.
  4. return useContext(ToastContext);: The core of this hook. It retrieves the value (which contains addToast and removeToast functions) from the nearest ToastContext.Provider in the component tree and returns it. This means any component calling useToast() will receive these functions.

How useToast is Used in a Component (e.g., BlogPostPage.js)

// Inside BlogPostPage.js (or any other component that needs toasts)
import { useToast } from '../hooks/useToast';

const CodeBlock = ({ /* ... */ }) => {
  const { addToast } = useToast(); // Access addToast function

  const handleCopy = () => {
    // ... copy logic ...
    addToast({
      title: 'Success',
      message: 'Copied to clipboard!',
      duration: 3000,
    });
    // ...
  };
  // ...
};

By calling const { addToast } = useToast();, the CodeBlock component (or any other component) gains direct access to the addToast function without needing to know where ToastContext is defined or how the toast state is managed. This makes the CodeBlock component cleaner and more focused on its primary responsibility.

Another Potential Custom Hook (Conceptual Example)

Consider the scroll tracking logic in BlogPostPage.js:

// src/pages/BlogPostPage.js - inside BlogPostPage component
const [readingProgress, setReadingProgress] = useState(0);
const [isAtTop, setIsAtTop] = useState(true);
const contentRef = useRef(null);

useEffect(() => {
  const handleScroll = () => {
    if (contentRef.current) {
      const { scrollTop, scrollHeight, clientHeight } =
        document.documentElement;
      const totalHeight = scrollHeight - clientHeight;
      const currentProgress = (scrollTop / totalHeight) * 100;
      setReadingProgress(currentProgress);
      setIsAtTop(scrollTop === 0);
    }
  };

  window.addEventListener('scroll', handleScroll);
  return () => window.removeEventListener('scroll', handleScroll);
}, [post]);

This logic could be extracted into a custom hook, for example, useScrollProgress:

// src/hooks/useScrollProgress.js (Conceptual)
import { useState, useEffect } from 'react';

const useScrollProgress = (contentRef, dependency) => {
  const [readingProgress, setReadingProgress] = useState(0);
  const [isAtTop, setIsAtTop] = useState(true);

  useEffect(() => {
    const handleScroll = () => {
      if (contentRef.current) {
        const { scrollTop, scrollHeight, clientHeight } =
          document.documentElement;
        const totalHeight = scrollHeight - clientHeight;
        const currentProgress = (scrollTop / totalHeight) * 100;
        setReadingProgress(currentProgress);
        setIsAtTop(scrollTop === 0);
      }
    };

    window.addEventListener('scroll', handleScroll);
    return () => window.removeEventListener('scroll', handleScroll);
  }, [contentRef, dependency]); // Re-run if contentRef or dependency changes

  return { readingProgress, isAtTop };
};

export default useScrollProgress;

Then, BlogPostPage.js would become cleaner:

// src/pages/BlogPostPage.js - inside BlogPostPage component
const contentRef = useRef(null);
const { readingProgress, isAtTop } = useScrollProgress(contentRef, post);
// ...

This demonstrates how custom hooks can abstract away complex logic, making components more focused and easier to read.

Summary

Custom Hooks are a fundamental pattern in modern React development for sharing stateful logic. By following the use naming convention and leveraging other built-in Hooks, you can create highly reusable and maintainable code that enhances the overall architecture of your React applications.

Read more...

]]>
<![CDATA[How React Toasts Work in `fezcodex`]]> https://fezcode.com/blog/react-toast-explanation-in-details https://fezcode.com/blog/react-toast-explanation-in-details Sat, 25 Oct 2025 00:00:00 GMT Deep Dive: How React Toasts Work in fezcodex

Toast notifications are a staple of modern web applications. They provide non-intrusive feedback to users about the result of their actions. In the fezcodex project, we have a robust and reusable toast system. This article will break down how it works, from its architecture to the React magic that holds it all together.

Part 1: The Architecture - A Tale of Three Components

The toast system is elegantly designed around three key parts that work in harmony:

  1. ToastContext.js (The Brains): This is the central manager. It wraps our entire application, creating a "context" that any component can plug into. It holds the list of all active toasts and provides the functions (addToast, removeToast) to modify that list. It's also responsible for rendering the container where the toasts appear.

  2. useToast.js (The Public API): This is a custom React Hook that acts as a clean and simple gateway. Instead of components needing to know about the underlying context, they can just use this hook to get access to the addToast function. It's the "button" that other components press to request a toast.

  3. Toast.js (The Notification UI): This component represents a single toast message. It's responsible for its own appearance, animations, and, most importantly, its own demise. It knows how long it should be on screen and contains the logic to remove itself after its time is up.

Part 2: The Magic of useState - Where Does the State Go?

This is the crucial question. In ToastContext.js, we have this line:

const [toasts, setToasts] = useState([]);

When a component function runs, all its internal variables are created and then discarded when it's done. So how does the toasts array not just reset to [] every single time?

React Remembers.

The useState hook is a request to React to create and manage a piece of state on behalf of your component.

  1. First Render: The very first time ToastContext renders, React sees useState([]). It creates a "memory cell" for this specific component instance and puts an empty array [] inside it. It then returns that array to the component as the toasts variable.

  2. State Updates: When you call addToast, it eventually calls setToasts(...). This function doesn't change the state directly. Instead, it sends a message to React saying, "I have a new value for this state. Please update it and re-render the component."

  3. Subsequent Renders: When React re-renders ToastContext, it arrives at the useState([]) line again. But this time, React knows it has already created a state for this component. It ignores the initial value ([]) and instead provides the current value from its internal memory—the updated array of toasts.

This is the fundamental principle of React Hooks: they allow your function components to have stateful logic that persists across renders, managed by React itself.
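The "memory cell" idea can be sketched in plain JavaScript. This toy useState is not how React is actually implemented, but it shows why state survives re-renders while the initial value is only used once (useStateToy, render, and Counter are illustrative names):

```javascript
// Toy model of how React persists state across renders: values live in an
// array outside the component function, keyed by hook call order, so
// re-running the function sees the previously stored value.
const cells = [];
let cursor = 0;

function useStateToy(initialValue) {
  const index = cursor++;
  if (cells[index] === undefined) cells[index] = initialValue; // first render only
  const setState = (value) => { cells[index] = value; };
  return [cells[index], setState];
}

function render(component) {
  cursor = 0; // reset hook order before each render, as React does
  return component();
}

function Counter() {
  const [count, setCount] = useStateToy(0);
  setCount(count + 1); // the new value becomes visible on the next render
  return count;
}
```

The first render returns 0 and stores an update; the second render reads the updated cell and returns 1 — from then on, the initialValue argument is ignored.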

Part 3: The Full Lifecycle of a Toast

Let's tie it all together by following a single toast from birth to death.

  1. The Call: A user performs an action in a component (e.g., the Word Counter). That component calls addToast({ title: 'Success!', ... }).

  2. The Context: The useToast hook provides the addToast function from the ToastContext's context.

  3. The State Update: The addToast function in ToastContext runs. It creates a new toast object with a unique ID and calls setToasts([newToast, ...otherToasts]).

  4. The Re-render: React receives the state update request and schedules a re-render for ToastContext.

  5. The Render: ToastContext runs again. It calls useState, and React hands it the new array containing our new toast. The component's return statement is executed, and its .map() function now loops over an array that includes the new toast.

  6. The Birth: A new <Toast /> component is rendered on the screen. It receives its id, title, message, and duration as props.

  7. The Countdown: Inside the new <Toast /> component, a useEffect hook fires. It starts a setTimeout timer for the given duration.

  8. The End: When the timer finishes, it calls the removeToast(id) function that was passed down as a prop.

  9. The Cleanup: removeToast in the ToastContext calls setToasts(...) again, this time with an array that filters out the toast with the matching ID.

  10. The Final Re-render: React processes the state update, re-renders the ToastContext, and the toast is no longer in the array. It vanishes from the screen.
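Steps 3 and 9 — the only places the toast array actually changes — can be modeled as pure functions. This is a sketch of the state transitions, not the project's exact code:

```javascript
// Plain-JS model of the toast state transitions: the newest toast goes
// first, the list is capped at 5, and removal filters by id.
let nextId = 0;

function addToast(toasts, toast) {
  const newToast = { ...toast, id: nextId++ };
  const kept = toasts.length >= 5 ? toasts.slice(0, toasts.length - 1) : toasts;
  return [newToast, ...kept];
}

function removeToast(toasts, id) {
  return toasts.filter((toast) => toast.id !== id);
}
```

Each call returns a new array rather than mutating the old one, which is what lets React detect the change and re-render.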

Conclusion

The fezcodex toast system is a perfect microcosm of modern React development. It shows how to use Context to provide global functionality without cluttering components, and it relies on the magic of the useState hook to give components a memory that persists between renders. By letting React manage the state, we can write declarative UI that simply reacts to state changes.

Read more...

]]>
<![CDATA[React Refs Useref]]> https://fezcode.com/blog/react-refs-useref https://fezcode.com/blog/react-refs-useref Sat, 25 Oct 2025 00:00:00 GMT 015 - React: useRef Hook

The useRef Hook is a fundamental part of React that allows you to create mutable ref objects. These ref objects can hold a reference to a DOM element or any mutable value that persists across re-renders without causing a re-render when its value changes.

Why Use useRef?

useRef serves two primary purposes:

  1. Accessing the DOM directly: While React encourages a declarative approach to UI, there are times when you need to interact with the DOM directly (e.g., managing focus, text selection, media playback, or integrating with third-party DOM libraries).
  2. Storing mutable values that don't trigger re-renders: useRef can hold any mutable value, similar to an instance variable in a class component. Unlike useState, updating a ref's .current property does not trigger a re-render of the component. This is useful for storing values that need to persist across renders but whose changes don't need to be reflected in the UI immediately.

How useRef Works

useRef returns a plain JavaScript object with a single property called current. This current property can be initialized with an argument passed to useRef.

Syntax

const myRef = useRef(initialValue);
  • myRef: The ref object returned by useRef.
  • myRef.current: The actual mutable value or DOM element reference.
  • initialValue: The initial value for myRef.current.

Example: contentRef in src/pages/BlogPostPage.js

In BlogPostPage.js, useRef is used to get a direct reference to the main content div of the blog post. This reference is then used to calculate the reading progress based on scroll position.

// src/pages/BlogPostPage.js
import React, { useState, useEffect, useRef } from 'react';
// ...

const BlogPostPage = () => {
  // ...
  const contentRef = useRef(null); // Initialize contentRef with null
  // ...

  useEffect(() => {
    const handleScroll = () => {
      if (contentRef.current) { // Access the DOM element via .current
        const { scrollTop, scrollHeight, clientHeight } =
          document.documentElement;
        const totalHeight = scrollHeight - clientHeight;
        const currentProgress = (scrollTop / totalHeight) * 100;
        setReadingProgress(currentProgress);
        setIsAtTop(scrollTop === 0);
      }
    };

    window.addEventListener('scroll', handleScroll);
    return () => window.removeEventListener('scroll', handleScroll);
  }, [post]);

  return (
    // ...
    <div
      ref={contentRef} // Attach the ref to the div element
      className="prose prose-xl prose-dark max-w-none"
    >
      {/* ... Markdown content ... */}
    </div>
    // ...
  );
};

Explanation:

  1. const contentRef = useRef(null);: A ref object named contentRef is created and initialized with null. At this point, contentRef.current is null.
  2. <div ref={contentRef}>: The ref object is attached to the div element that contains the blog post's Markdown content. Once the component renders, React will set contentRef.current to point to this actual DOM div element.
  3. if (contentRef.current): Inside the useEffect's handleScroll function, contentRef.current is checked to ensure that the DOM element is available before attempting to access its properties (like scrollHeight or clientHeight).
  4. document.documentElement: While contentRef.current gives a reference to the specific content div, the scroll calculation here uses document.documentElement (the <html> element) to get the overall page scroll position and dimensions. This is a common pattern for tracking global scroll progress.
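The progress arithmetic inside handleScroll is pure and easy to verify in isolation. Extracted here as a hypothetical readingProgress helper, not code from the project:

```javascript
// Pure version of the reading-progress math used in handleScroll:
// the percentage of the scrollable distance already scrolled.
function readingProgress(scrollTop, scrollHeight, clientHeight) {
  const totalHeight = scrollHeight - clientHeight;
  if (totalHeight <= 0) return 0; // page shorter than the viewport
  return (scrollTop / totalHeight) * 100;
}
```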

useRef vs. useState

It's important to understand when to use useRef versus useState:

| Feature | useState | useRef |
| --- | --- | --- |
| Purpose | Manages state that triggers re-renders. | Accesses DOM elements or stores mutable values that don't trigger re-renders. |
| Re-renders | Updates to state variables cause component re-renders. | Updates to ref.current do not cause re-renders. |
| Value persistence | Value persists across re-renders. | Value persists across re-renders. |
| Mutability | State is generally treated as immutable (updated via setState). | ref.current is directly mutable. |

When to use useRef:

  • Managing focus, text selection, or media playback.
  • Triggering imperative animations.
  • Integrating with third-party DOM libraries.
  • Storing any mutable value that you don't want to trigger a re-render when it changes (e.g., a timer ID, a previous value of a prop).

Summary

useRef provides a way to "escape" React's declarative paradigm when necessary, offering direct access to the underlying DOM or a persistent mutable storage for values that don't need to be part of the component's reactive state. It's a powerful tool for specific use cases where direct imperative manipulation or persistent non-state values are required.

Read more...

]]>
<![CDATA[React Memoization Hooks]]> https://fezcode.com/blog/react-memoization-hooks https://fezcode.com/blog/react-memoization-hooks Sat, 25 Oct 2025 00:00:00 GMT 016 - React: Memoization Hooks (useCallback, useMemo) and React.memo

In React, components re-render when their state or props change. While React is highly optimized, unnecessary re-renders can sometimes impact performance, especially for complex components or frequently updated lists. Memoization techniques help prevent these unnecessary re-renders by caching computation results or function definitions.

1. useCallback Hook

useCallback is a Hook that returns a memoized callback function. It's useful when passing callbacks to optimized child components that rely on reference equality to prevent unnecessary re-renders.

Syntax

const memoizedCallback = useCallback(
  () => {
    doSomething(a, b);
  },
  [a, b], // dependencies
);
  • The function () => { doSomething(a, b); } will only be re-created if a or b changes.

Example from src/components/ToastContext.js

// src/components/ToastContext.js
import React, { createContext, useState, useCallback } from 'react';
// ...

let id = 0; // module-level counter for unique toast IDs

export const ToastProvider = ({ children }) => {
  const [toasts, setToasts] = useState([]);

  const addToast = useCallback((toast) => {
    const newToast = { ...toast, id: id++ };
    setToasts((prevToasts) => {
      if (prevToasts.length >= 5) {
        const updatedToasts = prevToasts.slice(0, prevToasts.length - 1);
        return [newToast, ...updatedToasts];
      }
      return [newToast, ...prevToasts];
    });
  }, []); // Empty dependency array: addToast is created only once

  const removeToast = useCallback((id) => {
    setToasts((prevToasts) => prevToasts.filter((toast) => toast.id !== id));
  }, []); // Empty dependency array: removeToast is created only once

  return (
    <ToastContext.Provider value={{ addToast, removeToast }}>
      {/* ... */}
    </ToastContext.Provider>
  );
};

Explanation:

  • Both the addToast and removeToast functions are wrapped in useCallback with an empty dependency array ([]). This means these functions are created only once, when the provider component first renders, and will not change on subsequent re-renders.
  • This matters because addToast and removeToast are passed down as part of the value to ToastContext.Provider. If these functions were re-created on every render, any child component consuming this context and relying on reference equality (e.g., with React.memo or useMemo) might unnecessarily re-render.
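Why reference equality matters can be shown without React at all: each render of a plain function produces a brand-new function object, and a dependency-keyed cache — the core idea behind useCallback — is what keeps the reference stable. A sketch (makeMemoizedCallback is an illustrative name, not a React API):

```javascript
// Without memoization, every call creates a new function object, so
// reference-equality checks (React.memo, dependency arrays) see a change.
function renderWithoutMemo() {
  return () => {};
}
// renderWithoutMemo() !== renderWithoutMemo(): new identity each time.

// A cache keyed by dependencies returns the same reference until the
// deps change — the idea behind useCallback.
function makeMemoizedCallback() {
  let lastDeps = null;
  let lastFn = null;
  return (fn, deps) => {
    const same = lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((dep, i) => Object.is(dep, lastDeps[i]));
    if (!same) {
      lastFn = fn;
      lastDeps = deps;
    }
    return lastFn;
  };
}
```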

2. useMemo Hook

useMemo is a Hook that returns a memoized value. It's useful for optimizing expensive calculations that don't need to be re-computed on every render.

Syntax

const memoizedValue = useMemo(() => computeExpensiveValue(a, b), [a, b]);
  • The function () => computeExpensiveValue(a, b) will only execute if a or b changes. Otherwise, it returns the previously computed value.

Conceptual Example (Not directly in project, but common use case)

Imagine a component that filters a large list based on some criteria:

import { useMemo } from 'react';

function ProductList({ products, filterText }) {
  // Without useMemo, this filter would re-run on every render, which can
  // be expensive if products is a very large array:
  // const filteredProducts = products.filter(product =>
  //   product.name.includes(filterText)
  // );

  // With useMemo, the filtering only re-runs if products or filterText changes
  const filteredProducts = useMemo(() => {
    return products.filter(product =>
      product.name.includes(filterText)
    );
  }, [products, filterText]);

  return (
    <div>
      {filteredProducts.map(product => (
        <ProductItem key={product.id} product={product} />
      ))}
    </div>
  );
}

3. React.memo (Higher-Order Component)

React.memo is a higher-order component (HOC) that memoizes a functional component. It works similarly to PureComponent for class components. If the component's props are the same as the previous render, React.memo will skip rendering the component and reuse the last rendered result.

Syntax

const MyMemoizedComponent = React.memo(MyComponent, arePropsEqual);
  • MyComponent: The functional component to memoize.
  • arePropsEqual (optional): A custom comparison function. If provided, React will use it to compare prevProps and nextProps. If it returns true, the component will not re-render.

Conceptual Example (Not directly in project, but common use case)

// ProductItem.js
import React from 'react';

function ProductItem({ product }) {
  console.log('Rendering ProductItem', product.name);
  return <li>{product.name}</li>;
}

export default React.memo(ProductItem);

// In ProductList component (from useMemo example)
// If ProductItem is memoized, it will only re-render if its 'product' prop changes.

Explanation:

  • By wrapping ProductItem with React.memo, React will perform a shallow comparison of its props. If the product prop (and any other props) remains the same between renders of its parent, ProductItem will not re-render, saving computational resources.
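The default comparison React.memo performs is a shallow one, which can be sketched as follows (React's internal version differs in details):

```javascript
// Sketch of the shallow props comparison React.memo performs by default:
// same keys, and each value identical by Object.is. Nested objects with
// equal contents but different identity still count as "changed".
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}
```

This is why passing a freshly created object or arrow function as a prop defeats React.memo — the new identity fails the Object.is check even if the contents are identical, which ties back to the role of useCallback and useMemo.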

Summary

useCallback, useMemo, and React.memo are powerful tools for optimizing the performance of React applications by preventing unnecessary re-renders. They are particularly useful in scenarios involving expensive computations, frequently updated components, or when passing functions/objects as props to child components that rely on reference equality. While not every component needs memoization, understanding when and how to apply these techniques is crucial for building high-performance React applications.

Read more...

]]>
<![CDATA[The Necessary D&D Post]]> https://fezcode.com/blog/dnd-content https://fezcode.com/blog/dnd-content Tue, 21 Oct 2025 00:00:00 GMT RPG and DnD Content

When you see the d&d tag, it covers fantasy and rpg content as well.

Dungeons & Dragons is kinda fun, offering engaging narratives and strategic depth. It's a hobby that provides a welcome escape and a platform for creative exploration.

Divinity - Original Sin 2 (Main Theme)

Introduction

Let's talk about some of the D&D and RPG content I enjoy. I often find myself immersed in the detailed lore and mechanics of games like Divinity Original Sin 2, where decisions carry weight. I appreciate shows like "Secret Level," especially for their Warhammer episodes, which offer a glimpse into the creative process behind these expansive universes. My gaming collection is rich with D&D-inspired titles such as Baldur's Gate 3, Pathfinder, Warhammer 40,000: Dawn of War II, and SpellForce 3. Beyond these, I also delve into other fantasy epics like the Warcraft series, each presenting its own blend of adventure and challenge.

My connection to these worlds also includes a practical side, evident in my Warhammer figurine collection. The process of building and painting these miniatures is a satisfying aspect of the hobby, complementing the imaginative battles they represent. It's an activity that combines creativity, strategic thinking, and a sense of community, all contributing to a well-rounded pastime.

Secret Level - Warhammer Episode

D&D, RPG, and Fantasy: A Quick Dive

It's easy to use terms like D&D, RPG, and fantasy interchangeably, and honestly, I often do! While they're distinct, they share a lot of common ground, especially in the realm of immersive storytelling and world-building.

Dungeons & Dragons (D&D) is a specific tabletop role-playing game, often considered the progenitor of modern RPGs. It has its own set of rules, lore, and established settings. When I talk about D&D, I'm usually referring to this specific game or experiences directly inspired by its mechanics and themes.

Role-Playing Games (RPGs) are a broader genre where players assume the roles of characters in a fictional setting. This can be tabletop (like D&D), video games (like Divinity Original Sin 2 or Baldur's Gate 3), or even live-action. The core is character development, narrative choice, and often, strategic combat.

Fantasy is a genre of speculative fiction set in a fictional universe, often inspired by real-world myth and folklore. It typically involves magic, mythical creatures, and supernatural elements. Many RPGs and D&D campaigns are set in fantasy worlds, but not all fantasy is an RPG, and not all RPGs are fantasy (think sci-fi RPGs).

Here's a quick comparison:

Feature     | Dungeons & Dragons (D&D) | Role-Playing Games (RPGs)      | Fantasy Genre
Definition  | Specific tabletop game   | Broad game genre               | Broad fiction genre
Ruleset     | Specific, detailed       | Varies widely                  | N/A
Player Role | Character in D&D world   | Character in fictional setting | Reader/Viewer
Interaction | High (tabletop)          | High (tabletop/video game)     | Low (passive)
Setting     | Typically fantasy        | Any (fantasy, sci-fi, etc.)    | Typically magical/mythical

Despite their differences, the lines blur constantly. Many fantasy stories lend themselves perfectly to RPGs, and D&D is a prime example of a fantasy RPG. For me, when I say "DND," I'm often using it as a shorthand to encompass all these wonderful aspects: the specific game, the broader genre of role-playing, and the rich, imaginative fantasy worlds they inhabit. It's all part of the same grand adventure!

Read more...

]]>
<![CDATA[Gemini Explains How Hooks Work with Toast Component]]> https://fezcode.com/blog/gemini-explains-how-hooks-work-with-toast-component https://fezcode.com/blog/gemini-explains-how-hooks-work-with-toast-component Sat, 18 Oct 2025 00:00:00 GMT Gemini Explains How Hooks Work with Toast Component

There are two versions of this: the first is the explanation it extracted from the system, and the second is the one I prompted it to create.

First One: Gemini Explains How Hooks Work with Toast Component

This post explains the changes made to implement a toast notification system, focusing on React concepts like Hooks, Context, and Providers.

Core Concepts

Components

React applications are built using components. A component is a reusable piece of code that defines a part of the user interface.

State

Components can have state. State is data that is managed by the component. When the state of a component changes, React automatically re-renders the component to reflect the new state.

Hooks

Hooks are special functions that let you "hook into" React features. They were introduced in React to allow the use of state and other React features in functional components. Hooks are functions whose names start with use.

  • useState: This hook lets you add state to a functional component.
  • useEffect: This hook lets you perform side effects in a functional component.
  • useContext: This hook lets you subscribe to React context without introducing nesting.

Context and Providers

Context provides a way to share data between components without having to pass props down manually at every level. The Provider is a component that makes the data available to all the components in the tree below it. Any component in the tree can then access the data using the useContext hook.

Custom Hooks

A custom hook is a JavaScript function whose name starts with use and that calls other hooks. Custom hooks are a way to reuse stateful logic between components.

File-by-File Explanation of Changes

Now, let's look at the changes made to each file.

src/hooks/useToast.js

This file defines a custom hook called useToast. It uses useContext to access the ToastContext and provides a simple way for any component to access the addToast and removeToast functions.

import { useContext } from 'react';
import { ToastContext } from '../components/ToastContext';

export const useToast = () => {
  return useContext(ToastContext);
};

src/components/ToastContext.js

This file defines the ToastContext and its ToastProvider component, which manages the state of the toasts. It uses useState to hold the toasts array, and provides addToast and removeToast functions. It also renders the Toast components.

import React, { createContext, useState, useCallback } from 'react';
import Toast from './Toast';

export const ToastContext = createContext();

let id = 0;

export const ToastProvider = ({ children }) => {
  const [toasts, setToasts] = useState([]);

  const addToast = useCallback((toast) => {
    const newToast = { ...toast, id: id++ };
    setToasts((prevToasts) => {
      if (prevToasts.length >= 5) {
        const updatedToasts = prevToasts.slice(0, prevToasts.length - 1);
        return [newToast, ...updatedToasts];
      }
      return [newToast, ...prevToasts];
    });
  }, []);

  const removeToast = useCallback((id) => {
    setToasts((prevToasts) => prevToasts.filter((toast) => toast.id !== id));
  }, []);

  return (
    <ToastContext.Provider value={{ addToast, removeToast }}>
      {children}
      <div className="fixed top-28 right-10 z-50">
          {toasts.map((toast) => (
            <Toast
              key={toast.id}
              id={toast.id}
              title={toast.title}
              message={toast.message}
              duration={toast.duration}
              removeToast={removeToast}
            />
          ))}
      </div>
    </ToastContext.Provider>
  );
};

src/components/Toast.js

This file defines the Toast component, which is the UI for a single toast notification. It uses useEffect to set a timer for automatic removal and framer-motion for animations.

import React, { useEffect } from 'react';
import { motion } from 'framer-motion';
import { X } from '@phosphor-icons/react';

const Toast = ({ id, title, message, duration, removeToast }) => {
  useEffect(() => {
    const timer = setTimeout(() => {
      removeToast(id);
    }, duration);

    return () => {
      clearTimeout(timer);
    };
  }, [id, duration, removeToast]);

  return (
    <motion.div
      initial={{ x: '100%', opacity: 0 }}
      animate={{ x: 0, opacity: 1 }}
      exit={{ opacity: 0 }}
      transition={{ type: 'spring', stiffness: 120, damping: 20 }}
      className="text-gray-300 py-4 px-10 rounded-lg shadow-lg border backdrop-blur-sm flex items-center justify-between w-96 mb-4"
      style={{ backgroundColor: 'rgba(68, 64, 59, 0.8)', borderColor: '#5a5e64' }}
    >
      <div className="flex flex-col text-sm">
        <span>{title}</span>
        <span>{message}</span>
      </div>
      <button onClick={() => removeToast(id)} className="pr-2">
        <X size={24} weight="bold" />
      </button>
    </motion.div>
  );
};

export default Toast;

src/App.js

The main component of the application. The ToastProvider wraps the entire application to make the toast functionality available everywhere.

import React from 'react';
import { HashRouter as Router } from 'react-router-dom';
import Layout from './components/Layout';
import AnimatedRoutes from './components/AnimatedRoutes';
import { ToastProvider } from './components/ToastContext';

function App() {
  return (
    <Router>
      <ToastProvider>
        <Layout>
          <AnimatedRoutes />
        </Layout>
      </ToastProvider>
    </Router>
  );
}

export default App;

src/pages/BlogPostPage.js

This component uses the useToast hook to show a toast notification when a code block is copied.

// ... imports
import { useToast } from '../hooks/useToast';

// ...

const CodeBlock = ({ node, inline, className, children, openModal, ...props }) => {
  const match = /language-(\w+)/.exec(className || '');
  const { addToast } = useToast();
  const handleCopy = () => {
    const textToCopy = String(children);
    if (navigator.clipboard && navigator.clipboard.writeText) {
      navigator.clipboard.writeText(textToCopy).then(() => {
        addToast({ title: 'Success', message: 'Copied to clipboard!', duration: 3000 });
      }, () => {
        addToast({ title: 'Error', message: 'Failed to copy!', duration: 3000 });
      });
    } else {
      const textArea = document.createElement('textarea');
      textArea.value = textToCopy;
      textArea.style.position = 'fixed';
      textArea.style.left = '-9999px';
      document.body.appendChild(textArea);
      textArea.focus();
      textArea.select();
      try {
        document.execCommand('copy');
        addToast({ title: 'Success', message: 'Copied to clipboard!', duration: 3000 });
      } catch (err) {
        addToast({ title: 'Error', message: 'Failed to copy!', duration: 3000 });
      }
      document.body.removeChild(textArea);
    }
  };

  // ... rest of the component
};

// ... rest of the file

Second One: Gemini Explains How Hooks Work with Toast Component

I gave Gemini the following prompt:

Pretend like I have no idea what react.js is. I want you to explain everything done in the changed files. What is useHook what are hooks and all that. The changes made to App.js, Blogpostpage.js, toast.js, toastprovider.js (and what provider is), useToast.js (what is useXXXX). Explain everything step by step to me. Write everything to explanation.md file. (not exists)

Explanation of the Toast Notification System

This document explains the changes made to the website to implement a toast notification system. The explanation is intended for someone with no prior knowledge of React.js.

Core Concepts

Before diving into the code changes, let's understand some fundamental concepts of React.

Components

React applications are built using components. A component is a reusable piece of code that defines a part of the user interface. For example, a button, a form, or a navigation bar can all be components. Components can be nested inside other components to create complex user interfaces.

State

Components can have state. State is data that is managed by the component. When the state of a component changes, React automatically re-renders the component to reflect the new state. For example, a component that displays a counter might have a count state. When the count is incremented, the component re-renders to display the new value.

Hooks

Hooks are special functions that let you "hook into" React features. They were introduced in React to allow the use of state and other React features in functional components (which are simpler than class-based components). Hooks are functions whose names start with use. Here are some common hooks:

  • useState: This hook lets you add state to a functional component.
  • useEffect: This hook lets you perform side effects in a functional component. Side effects are operations that are not directly related to rendering the UI, such as fetching data from a server or setting up a timer.
  • useContext: This hook lets you subscribe to React context without introducing nesting.

Context and Providers

In a React application, data is typically passed from parent components to child components via props. However, this can become cumbersome for data that needs to be accessed by many components at different nesting levels.

Context provides a way to share data between components without having to pass props down manually at every level. The Provider is a component that makes the data available to all the components in the tree below it. Any component in the tree can then access the data using the useContext hook.

Custom Hooks

A custom hook is a JavaScript function whose name starts with use and that calls other hooks. Custom hooks are a way to reuse stateful logic between components. For example, you could create a custom hook that fetches data from a server and returns the data and a loading state. This hook could then be used in any component that needs to fetch data.

File-by-File Explanation of Changes

Now, let's look at the changes made to each file.

src/hooks/useToast.js

This file defines a custom hook called useToast.

import { useContext } from 'react';
import { ToastContext } from '../components/ToastContext';

export const useToast = () => {
  return useContext(ToastContext);
};
  • useContext(ToastContext): This line uses the useContext hook to access the data provided by the ToastContext. The ToastContext is created in ToastContext.js.
  • export const useToast: This line exports the useToast hook so that it can be used in other components.

This custom hook provides a simple way for any component to access the addToast and removeToast functions without having to pass them down as props.

src/components/ToastContext.js

This file defines the ToastContext and its ToastProvider component, which is responsible for managing the state of the toasts.

import React, { createContext, useState, useCallback } from 'react';
import Toast from './Toast';

export const ToastContext = createContext();

let id = 0;

export const ToastProvider = ({ children }) => {
  const [toasts, setToasts] = useState([]);

  const addToast = useCallback((toast) => {
    const newToast = { ...toast, id: id++ };
    setToasts((prevToasts) => {
      if (prevToasts.length >= 5) {
        const updatedToasts = prevToasts.slice(0, prevToasts.length - 1);
        return [newToast, ...updatedToasts];
      }
      return [newToast, ...prevToasts];
    });
  }, []);

  const removeToast = useCallback((id) => {
    setToasts((prevToasts) => prevToasts.filter((toast) => toast.id !== id));
  }, []);

  return (
    <ToastContext.Provider value={{ addToast, removeToast }}>
      {children}
      <div className="fixed top-28 right-10 z-50">
          {toasts.map((toast) => (
            <Toast
              key={toast.id}
              id={toast.id}
              title={toast.title}
              message={toast.message}
              duration={toast.duration}
              removeToast={removeToast}
            />
          ))}
      </div>
    </ToastContext.Provider>
  );
};
  • export const ToastContext = createContext();: This line creates a new context object.
  • const [toasts, setToasts] = useState([]);: This line uses the useState hook to create a toasts state variable, which is an array that will hold all the toast objects.
  • addToast: This function adds a new toast to the toasts array. It also ensures that there are at most 5 toasts at any given time.
  • removeToast: This function removes a toast from the toasts array.
  • <ToastContext.Provider value={{ addToast, removeToast }}>: This is the provider component. It makes the addToast and removeToast functions available to all the components in the tree below it.
  • {toasts.map((toast) => ...)}: This code maps over the toasts array and renders a Toast component for each toast object.
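The capping behavior inside addToast can be isolated as a plain JavaScript sketch (addCapped is a hypothetical helper name, not part of the codebase):

```javascript
// Plain-JS sketch of the addToast capping logic: the newest toast goes
// first, and the oldest entry is dropped once the list already holds `max`.
function addCapped(prevToasts, newToast, max = 5) {
  if (prevToasts.length >= max) {
    return [newToast, ...prevToasts.slice(0, prevToasts.length - 1)];
  }
  return [newToast, ...prevToasts];
}

console.log(addCapped(['a', 'b'], 'c'));
// ['c', 'a', 'b']
console.log(addCapped(['a', 'b', 'c', 'd', 'e'], 'f'));
// ['f', 'a', 'b', 'c', 'd']
```

Because setToasts is called with a function of the previous state, React applies exactly this kind of pure transformation on every update.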

src/components/Toast.js

This file defines the Toast component, which is the UI for a single toast notification.

import React, { useEffect } from 'react';
import { motion } from 'framer-motion';
import { X } from '@phosphor-icons/react';

const Toast = ({ id, title, message, duration, removeToast }) => {
  useEffect(() => {
    const timer = setTimeout(() => {
      removeToast(id);
    }, duration);

    return () => {
      clearTimeout(timer);
    };
  }, [id, duration, removeToast]);

  return (
    <motion.div
      initial={{ x: '100%', opacity: 0 }}
      animate={{ x: 0, opacity: 1 }}
      exit={{ opacity: 0 }}
      transition={{ type: 'spring', stiffness: 120, damping: 20 }}
      className="text-gray-300 py-4 px-10 rounded-lg shadow-lg border backdrop-blur-sm flex items-center justify-between w-96 mb-4"
      style={{ backgroundColor: 'rgba(68, 64, 59, 0.8)', borderColor: '#5a5e64' }}
    >
      <div className="flex flex-col text-sm">
        <span>{title}</span>
        <span>{message}</span>
      </div>
      <button onClick={() => removeToast(id)} className="pr-2">
        <X size={24} weight="bold" />
      </button>
    </motion.div>
  );
};

export default Toast;
  • useEffect(() => { ... }, [id, duration, removeToast]);: This hook sets a timer to automatically remove the toast after the given duration. The dependency array [id, duration, removeToast] tells React to run the effect after the first render and again only if one of those values changes; the cleanup function returned by the effect clears the pending timer so a stale timer never fires.
  • <motion.div ...>: This is a component from the framer-motion library that provides animations. The initial, animate, and exit props define the animations for when the toast appears and disappears.
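The way a dependency array gates an effect can be sketched in plain JavaScript (depsChanged is a rough illustration, not React's actual implementation):

```javascript
// Rough model of React's dependency check: an effect re-runs when there
// are no previous deps (first render) or when some dependency changed,
// compared with Object.is.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: always run the effect
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

console.log(depsChanged(null, [1, 3000]));      // true  (first render)
console.log(depsChanged([1, 3000], [1, 3000])); // false (effect skipped)
console.log(depsChanged([1, 3000], [1, 5000])); // true  (duration changed)
```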

src/App.js

This is the main component of the application. The only change here was to wrap the entire application with the ToastProvider.

import React from 'react';
import { HashRouter as Router } from 'react-router-dom';
import Layout from './components/Layout';
import AnimatedRoutes from './components/AnimatedRoutes';
import { ToastProvider } from './components/ToastContext';

function App() {
  return (
    <Router>
      <ToastProvider>
        <Layout>
          <AnimatedRoutes />
        </Layout>
      </ToastProvider>
    </Router>
  );
}

export default App;
  • <ToastProvider>: By wrapping the Layout component with the ToastProvider, we make the addToast and removeToast functions available to all the components in the application.

src/pages/BlogPostPage.js

This component displays a single blog post. The change here was to use the useToast hook to show a toast notification when the user copies a code block.

// ... imports
import { useToast } from '../hooks/useToast';

// ...

const CodeBlock = ({ node, inline, className, children, openModal, ...props }) => {
  const match = /language-(\w+)/.exec(className || '');
  const { addToast } = useToast();
  const handleCopy = () => {
    const textToCopy = String(children);
    if (navigator.clipboard && navigator.clipboard.writeText) {
      navigator.clipboard.writeText(textToCopy).then(() => {
        addToast({ title: 'Success', message: 'Copied to clipboard!', duration: 3000 });
      }, () => {
        addToast({ title: 'Error', message: 'Failed to copy!', duration: 3000 });
      });
    } else {
      const textArea = document.createElement('textarea');
      textArea.value = textToCopy;
      textArea.style.position = 'fixed';
      textArea.style.left = '-9999px';
      document.body.appendChild(textArea);
      textArea.focus();
      textArea.select();
      try {
        document.execCommand('copy');
        addToast({ title: 'Success', message: 'Copied to clipboard!', duration: 3000 });
      } catch (err) {
        addToast({ title: 'Error', message: 'Failed to copy!', duration: 3000 });
      }
      document.body.removeChild(textArea);
    }
  };

  // ... rest of the component
};

// ... rest of the file
  • const { addToast } = useToast();: This line uses the useToast hook to get the addToast function.
  • addToast({ title: 'Success', message: 'Copied to clipboard!', duration: 3000 });: This line calls the addToast function to show a toast notification.

This change demonstrates how the useToast hook can be used in any component to easily show toast notifications without having to worry about the implementation details.

Read more...

]]>
<![CDATA[Do I Need to Create a Lib For That?]]> https://fezcode.com/blog/do-i-need-to-create-a-lib-for-that https://fezcode.com/blog/do-i-need-to-create-a-lib-for-that Thu, 16 Oct 2025 00:00:00 GMT My Journey into Go Libraries

Creating my first Go library, go-tournament-brackets, has been a rewarding experience. It's a project that allowed me to dive deep into Go's capabilities for building reusable and efficient code. The process of designing the data structures, handling edge cases like automatic bye calculations, and then building an interactive command-line interface on top of it was both challenging and immensely satisfying. There's a unique sense of accomplishment in seeing your code not just work, but also be easily consumable by others.

About go-tournament-brackets

go-tournament-brackets is a versatile Go library designed for generating and managing single-elimination tournament brackets. It offers two primary components:

  • A Robust Go Library: This provides a set of data models and functions that can be integrated into any Go application. It intelligently handles tournament logic, including the correct calculation of rounds, match-ups, and automatic byes for varying numbers of participants.
  • An Interactive Command-Line Interface (CLI): Built on top of the library, this CLI allows users to run a tournament from start to finish. You can input participant names, visualize the bracket in ASCII art, and interactively determine match winners until a champion is crowned.

This library aims to simplify the process of setting up and managing tournaments, whether you're integrating it into a larger application or running a quick tournament from your terminal.

Ok, but why?

The inspiration for go-tournament-brackets struck during a casual phone call with a friend, Mustafa (he personally requested to be named directly). I was unwinding, listening to Morcheeba's "Easier Said than Done," when he posed a fun challenge: rank our favorite rappers. His idea was to create a bracket, share it, and play through it together. Simple enough, right?

Not quite. As he started looking for online bracket makers, we quickly hit a wall. Most platforms demanded sign-ups, locked away certain tournament types behind paywalls, and generally overcomplicated what should have been a straightforward, enjoyable activity. For something so simple, the hoops we had to jump through felt entirely unnecessary. That's when the idea sparked: why not build my own? A bracket maker that was free, flexible, and didn't force you into a convoluted user experience. And so, the seed for go-tournament-brackets was planted.

How did I do that?

The journey from idea to a working library began with a deep dive into the mechanics of tournament brackets. I found myself poring over Wikipedia articles, unraveling the intricacies of single-elimination formats, byes, and seeding. Once I had a solid grasp of the theoretical underpinnings, I turned to my trusty collaborator, Gemini 2.5-Pro.

My first request to Gemini was simple: "Generate the necessary Go files for a tournament bracket library." It quickly scaffolded the basic project structure, providing the initial Go files. From there, I started defining the core data structures, translating the concepts from my research into Go structs. The models.go file was born out of this phase.

// Tournament is the root object that contains all data for a tournament event.
type Tournament struct {
	Name           string
	Rounds         []Round
	TournamentType TournamentType
	Options        *Options
}

With the foundational structs in place, Gemini and I embarked on implementing the core logic. This was where the real challenge and fun began. Handling the "bye" mechanism – ensuring that teams or competitors who automatically advance in the first round are correctly placed – proved to be particularly tricky. It's not as straightforward as it might seem, but with Gemini's assistance, we iterated through various approaches, and it did its best to help navigate those complexities. It was a true collaborative effort, pushing both my understanding and Gemini's capabilities to deliver a robust solution.
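To make the bye arithmetic concrete, here is a small standalone Go sketch (not the library's actual code): a single-elimination bracket needs a power-of-two field, so the first round hands out byes until the next power of two is reached.

```go
package main

import "fmt"

// nextPowerOfTwo returns the smallest power of two >= n.
func nextPowerOfTwo(n int) int {
	p := 1
	for p < n {
		p *= 2
	}
	return p
}

// byes returns how many first-round byes a single-elimination
// bracket needs for the given number of participants.
func byes(participants int) int {
	if participants < 2 {
		return 0
	}
	return nextPowerOfTwo(participants) - participants
}

func main() {
	fmt.Println(byes(6)) // 6 players -> bracket of 8 -> 2 byes
	fmt.Println(byes(8)) // already a power of two -> 0 byes
}
```

The hard part the library solves is not this count but *where* those byes land in the draw, which is what took the most iteration.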

Getting Your Go Module Out There!

So, you've built your awesome Go package, and now you want to share it with the world (or at least your fellow developers). Here's the lowdown on how to get your module published and discoverable:

  1. Your go.mod File is Key: First things first, make sure your go.mod file has the right module path. This should usually point directly to your GitHub repo, like module github.com/fezcode/go-tournament-brackets. This is how Go knows where to find your cool new code!

  2. Tag It, You're It! (Creating a Release): Go modules love Git tags for versioning. Think of a tag as a snapshot of a specific, stable version of your code.

    • Tag your commit: Use git tag vX.Y.Z (e.g., git tag v0.1.0). We highly recommend following Semantic Versioning (SemVer) – it makes life easier for everyone!
    • Push that tag: Don't forget to push your shiny new tag to GitHub: git push origin vX.Y.Z. This is what tells the Go module proxies that a new version exists.
  3. Letting Go Proxies Know (No, You Don't "Publish" It Directly): Here's a cool part: Go module proxies (like proxy.golang.org) are pretty smart. They'll usually find your new module version automatically once you push that tag. You don't typically run a "publish" command.

    • Want to nudge it? If you're impatient (we get it!), you can explicitly ask a proxy to fetch your new version. For example, running go get github.com/fezcode/go-tournament-brackets@v0.1.0 (or go list -m ...) from any Go project will make the proxy grab it if it hasn't already. This is more about verifying discovery than publishing.
  4. Patience, Young Padawan (Waiting for pkg.go.dev): After your module is tagged and the proxies know about it, pkg.go.dev (Go's official package discovery site) will eventually list it. Just a heads-up: this isn't instant. It can take anywhere from a few minutes to a few hours for it to show up. So, grab a coffee, and it'll be there!

Read more...

]]>
<![CDATA[HashRouter vs. BrowserRouter]]> https://fezcode.com/blog/hashrouter-vs-browserrouter https://fezcode.com/blog/hashrouter-vs-browserrouter Wed, 15 Oct 2025 00:00:00 GMT HashRouter vs. BrowserRouter

When building Single-Page Applications (SPAs) with React, react-router-dom is a go-to library for handling client-side routing. It offers two primary router components: HashRouter and BrowserRouter. While both achieve the goal of navigating between different views in your application without full page reloads, they do so in fundamentally different ways, and these differences become crucial when deploying to static hosting environments like GitHub Pages.

The Core Difference: How They Handle URLs

The key to understanding HashRouter and BrowserRouter lies in how they interact with the browser's URL structure, specifically the path and fragment (or hash) parts of a URL.

A typical URL looks something like this: protocol://hostname:port/path?query#fragment

  • /path: This part of the URL is sent to the server. The server uses it to identify and serve a specific resource (e.g., an HTML file, an image, etc.).
  • #fragment: This part, also known as the hash, is never sent to the server. It's entirely handled by the client-side (the web browser).
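This split is easy to verify with Node's built-in URL class (a standalone sketch, unrelated to react-router's internals):

```javascript
// Demonstrates why hash routing works on static hosts: the server only
// ever sees the pathname, while the fragment stays in the browser.
function splitUrl(url) {
  const u = new URL(url);
  return {
    sentToServer: u.pathname,         // what GitHub Pages receives
    handledByClient: u.hash.slice(1), // what HashRouter reads after load
  };
}

const parts = splitUrl('https://yourdomain.com/#/blog/my-post');
console.log(parts.sentToServer);    // '/'
console.log(parts.handledByClient); // '/blog/my-post'
```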

HashRouter: The Static Hosting Friend

  • URL Structure: https://yourdomain.com/#/blog/my-post
  • How it Works:
    1. When a user navigates to a URL with a hash (e.g., #/blog/my-post), the server (in this case, GitHub Pages) only sees the part of the URL before the hash: https://yourdomain.com/.
    2. GitHub Pages, being a static file server, simply looks for and serves the index.html file located at the root of your deployment.
    3. Once index.html loads, your React application starts. HashRouter then inspects the full URL in the browser's address bar, extracts the route information from after the hash (/blog/my-post), and renders the corresponding component.
  • Why it Works on GitHub Pages: Because the server never sees the route information (e.g., /blog/my-post), it never tries to find a physical file at that path. It always serves index.html, and your client-side JavaScript handles all the routing. This makes HashRouter a very robust and straightforward choice for static hosting where you don't have server-side routing capabilities.

BrowserRouter: The Clean URL Enthusiast (with a Catch)

  • URL Structure: https://yourdomain.com/blog/my-post
  • How it Works:
    1. When a user navigates to a URL without a hash (e.g., /blog/my-post), the server receives the entire path: https://yourdomain.com/blog/my-post.
    2. GitHub Pages, as a static server, will then try to find a physical file at /blog/my-post.
  • Why it's Tricky on GitHub Pages: Since your React application is a Single-Page Application, there isn't a physical file named my-post (or blog/my-post) on the server. All your application's code is bundled into index.html and its associated JavaScript files. Consequently, GitHub Pages returns a 404 "Not Found" error because it can't find a file at the requested path. Your index.html is never served, and your React app never gets a chance to load and handle the routing.

The 404.html Workaround for BrowserRouter (and Why It's Often More Trouble)

To make BrowserRouter work on GitHub Pages, a common workaround involves creating a custom 404.html file. This file contains a JavaScript script that, when served by GitHub Pages (because a 404 occurred), attempts to:

  1. Rewrite the URL in the browser's history to the original requested path (e.g., /blog/my-post).
  2. Redirect the browser to your index.html.

The idea is that once index.html loads, BrowserRouter will see the rewritten URL and render the correct component. However, this approach is often fraught with issues:

  • Timing and Browser Behavior: The script's execution, history.replaceState, and the subsequent redirect can be sensitive to browser behavior, caching, and network timing.
  • Flickering/Double Redirects: Users might experience a brief flicker of the 404 page or multiple redirects before the correct content loads.
  • Debugging Complexity: Debugging issues in this setup can be challenging due to the asynchronous nature of the redirects and the interaction between the server's 404 handling and client-side JavaScript.
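For reference, the heart of that workaround (popularized by the spa-github-pages project) is an encoding step in 404.html; this is a hedged sketch of just that transformation, not a drop-in script:

```javascript
// Sketch of the 404.html trick: encode the requested path into the query
// string so a redirect to index.html can carry it; a companion script on
// index.html then restores it with history.replaceState before the
// router boots. The '~and~' escaping mirrors the spa-github-pages scheme.
function encodeRedirect(pathname, search, hash) {
  return (
    '/?/' + pathname.slice(1).replace(/&/g, '~and~') +
    (search ? '&' + search.slice(1).replace(/&/g, '~and~') : '') +
    hash
  );
}

console.log(encodeRedirect('/blog/my-post', '', ''));
// '/?/blog/my-post'
console.log(encodeRedirect('/blog/my-post', '?a=1', '#top'));
// '/?/blog/my-post&a=1#top'
```

Every request thus takes a 404 round-trip before the app even loads, which is exactly where the flicker and timing issues above come from.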

Conclusion: Choose Wisely for Static Hosting

For projects deployed on static hosting services like GitHub Pages, HashRouter offers a simpler, more reliable, and less problematic solution. While BrowserRouter provides aesthetically cleaner URLs, the effort required to make it work consistently on static hosts often outweighs the benefits, especially for personal projects or portfolios where server-side configuration is not an option. If clean URLs are an absolute requirement, a more robust hosting solution with server-side routing capabilities (like Netlify, Vercel, or a custom server) would be a better fit.

Read more...

]]>
<![CDATA[About Fezcodex]]> https://fezcode.com/blog/about-fezcodex https://fezcode.com/blog/about-fezcodex Tue, 14 Oct 2025 00:00:00 GMT My Warning Post (and what is this site about)

Welcome to my little corner of the internet! This isn't just a website; it's my personal playground, a digital diary where I ramble about pretty much anything that catches my fancy. I'm A. Samil Bulbul, a senior software engineer by trade, but here, I'm just me - sharing thoughts, creations, and obsessions.

You'll find a mixed bag here, from deep dives into the fascinating world of computer science and engineering (think React, Go, JavaScript, and Linux wizardry) to my unfiltered rants about life (not really), the universe, and everything in between. But it's not all serious code and existential crises! I'm also a huge fan of escaping into other worlds, so expect plenty of reviews and musings on the latest (or classic) video games that have stolen my hours (Vampire: The Masquerade - Bloodlines 2, anyone?). And when I'm not gaming, I'm probably lost in a good book, binging a captivating series like "The Bear", or grooving to some timeless tunes (De La Soul always on rotation).

This space is entirely my own, a place where my ideas, opinions, and sometimes questionable tastes run wild. If you're looking for polished, perfectly curated content, you might find some of that, but mostly, you'll find genuine, unadulterated me. Dive in, explore, and enjoy the ride!

Read more...

]]>
<![CDATA[Gemini 2.5 Flash Explains me How Image Modal Works]]> https://fezcode.com/blog/gemini-explains-how-image-modal-works https://fezcode.com/blog/gemini-explains-how-image-modal-works Fri, 05 Jan 2024 00:00:00 GMT Gemini Explains How Image Modal Works

Ok, this is a first for me. As a backend developer who knows nothing about React, I asked Gemini to explain the ImageModal.js component. I gave the following prompt:

can you explain to me everything done. Explain to me every word if necessary. I don't know anything about react so write it to text file called explanation.md. Write everything step by step to that file. As I said I don't know what useEffect is or what [src] does so explain everything in ImageModal.js and LogDetailPage.js files.

And I believe it gave me a great starting point. Here is the result. Enjoy.


Explanation of Changes: Adding an Image Modal

This document explains the changes made to implement an image modal feature, allowing users to click on images within log entries to view them in full size with animations. This explanation is tailored for someone unfamiliar with React.js.


Core Concepts Explained

Before diving into the file-specific changes, let's understand some fundamental React concepts used:

  • Components: In React, a "component" is a self-contained, reusable piece of code that represents a part of your user interface. Think of it like a building block. For example, a button, a navigation bar, or an entire page can be a component. We created a new component called ImageModal.
  • Props (Properties): Props are how you pass data from a parent component to a child component. They are like arguments to a function. For example, our ImageModal component receives src (the image source) and onClose (a function to close the modal) as props.
  • State: State is data that a component manages internally and can change over time. When a component's state changes, React automatically re-renders that component to reflect the new data. We use useState to manage state.
  • Hooks: Hooks are special functions in React that let you "hook into" React features like state and lifecycle methods from functional components (components written as JavaScript functions).
    • useState: This hook lets you add React state to functional components. It returns a pair: the current state value and a function that lets you update it. For example, const [modalImageSrc, setModalImageSrc] = useState(null); means modalImageSrc holds the current image source for the modal (initially null, meaning no image is open), and setModalImageSrc is the function you call to change it.
    • useEffect: This hook lets you perform "side effects" in functional components. Side effects are things like data fetching, subscriptions, or manually changing the DOM (Document Object Model, which is the structure of your web page).
      • useEffect(() => { /* code */ }, [dependencies]): The first argument is a function containing your effect logic. The second argument, [dependencies], is an array of values that the effect depends on. If any value in this array changes between renders, the effect function will run again. If the array is empty ([]), the effect runs only once after the initial render (like componentDidMount in class components). If you omit the array entirely, the effect runs after every render.
      • [src] in useEffect: In our ImageModal.js, useEffect(() => { ... }, [src]); means that the code inside this useEffect will run whenever the src prop changes. This is crucial for controlling the body's scroll behavior: when src becomes available (modal opens), we hide the scrollbar; when src becomes null (modal closes), we restore it. The return () => { ... }; part is a "cleanup" function that runs when the component unmounts or before the effect runs again, ensuring we always restore the scrollbar.
  • framer-motion: This is a popular animation library for React. It provides components like motion.div and AnimatePresence to easily add animations.
    • motion.div: A special div component from framer-motion that accepts animation props like initial (starting state), animate (ending state), exit (state when component is removed), and transition (how long the animation takes).
    • AnimatePresence: A component from framer-motion that enables components to animate when they are removed from the React tree (e.g., when src becomes null and the modal disappears). Without AnimatePresence, exit animations wouldn't work.
  • ReactMarkdown: A library used to render Markdown text (like the content of your log files) as HTML in a React application.
    • components prop: This prop allows you to override how ReactMarkdown renders specific HTML elements. For example, we tell it to use our custom ImageRenderer component whenever it encounters an <img> tag.
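The dependency-array behavior described above can be sketched in plain JavaScript. React compares each entry of the new dependency array against the previous one using Object.is, and skips the effect when every entry matches. The function name below is illustrative, not React's real internals:

```javascript
// Minimal sketch of how React decides whether an effect re-runs.
// React compares each dependency against the previous render's value
// with Object.is; if all match, the effect is skipped.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true;             // first render: always run
  if (prevDeps.length !== nextDeps.length) return true;
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

// Simulate three renders of ImageModal with a [src] dependency array:
console.log(depsChanged(null, ['photo.jpg']));          // true  -> effect runs
console.log(depsChanged(['photo.jpg'], ['photo.jpg'])); // false -> effect skipped
console.log(depsChanged(['photo.jpg'], [null]));        // true  -> effect runs (modal closed)
```

This is also why Object.is matters rather than ===: a NaN dependency would otherwise be "changed" on every render.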

src/components/ImageModal.js Explained

This file defines the ImageModal component, which is responsible for displaying a full-size image in an overlay and handling its opening/closing animations.

import React, { useEffect } from 'react'; // Import React and the useEffect hook
import { X } from '@phosphor-icons/react'; // Import the 'X' icon for the close button
import { motion, AnimatePresence } from 'framer-motion'; // Import motion and AnimatePresence for animations

const ImageModal = ({ src, alt, onClose }) => { // Define the ImageModal component, receiving src, alt, and onClose as props
  useEffect(() => { // This effect runs when src changes
    if (src) { // If an image source is provided (modal is open)
      document.body.style.overflow = 'hidden'; // Prevent scrolling on the main page
    } else { // If no image source (modal is closed)
      document.body.style.overflow = 'unset'; // Restore scrolling
    }
    return () => { // This cleanup function runs when the component unmounts or before the effect re-runs
      document.body.style.overflow = 'unset'; // Ensure scrolling is restored
    };
  }, [src]); // The effect re-runs whenever the 'src' prop changes

  // The modal is only rendered if 'src' has a value (i.e., an image is to be displayed)
  return (
    <AnimatePresence> {/* AnimatePresence enables exit animations for components */}
      {src && ( // Conditionally render the modal only if 'src' exists
        <motion.div // This is the main modal overlay, animated by framer-motion
          className="fixed inset-0 bg-black bg-opacity-75 flex justify-center items-center z-50 p-4"
          onClick={onClose} // Clicking the overlay closes the modal
          initial={{ opacity: 0 }} // Initial animation state (fully transparent)
          animate={{ opacity: 1 }} // Animation to (fully opaque)
          exit={{ opacity: 0 }} // Animation when component is removed (fade out)
        >
          <motion.div // This is the container for the image and close button, also animated
            className="relative"
            onClick={e => e.stopPropagation()} // Prevent clicks on the image/button from closing the modal
            initial={{ scale: 0.8, opacity: 0 }} // Initial state (smaller and transparent)
            animate={{ scale: 1, opacity: 1 }} // Animation to (normal size, opaque)
            exit={{ scale: 0.8, opacity: 0 }} // Animation when removed (shrink and fade out)
            transition={{ duration: 0.2 }} // Animation duration
          >
            <button // The close button
              onClick={onClose} // Closes the modal when clicked
              className="absolute top-2 right-2 text-white text-2xl bg-gray-800 rounded-full p-2 hover:bg-gray-700 focus:outline-none"
            >
              <X size={24} weight="bold" /> {/* The 'X' icon */}
            </button>
            <img src={src} alt={alt} className="max-w-full max-h-[90vh] object-contain" /> {/* The actual image */}
          </motion.div>
        </motion.div>
      )}
    </AnimatePresence>
  );
};

export default ImageModal; // Export the component so it can be used in other files

Key Changes in ImageModal.js:

  1. Imports: Added motion and AnimatePresence from framer-motion.
  2. Conditional Rendering with AnimatePresence: The entire modal content is now wrapped in <AnimatePresence> and conditionally rendered using {src && (...) }. This tells framer-motion to watch for when the src prop becomes null and the modal is about to disappear, allowing the exit animations to play.
  3. motion.div for Animations:
    • The outer div (the dark overlay) is now a motion.div with initial={{ opacity: 0 }}, animate={{ opacity: 1 }}, and exit={{ opacity: 0 }} for a fade-in/fade-out effect.
    • The inner div (containing the image and close button) is also a motion.div with initial={{ scale: 0.8, opacity: 0 }}, animate={{ scale: 1, opacity: 1 }}, and exit={{ scale: 0.8, opacity: 0 }}. This creates a subtle "pop" effect where the image scales up slightly as it appears and scales down as it disappears.
    • transition={{ duration: 0.2 }} sets the animation speed to 0.2 seconds.
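The scroll-locking useEffect at the top of ImageModal.js has a lifecycle worth seeing in isolation: each time src changes, React first calls the previous cleanup function, then re-runs the effect. Here is a plain-JavaScript simulation of that cycle (a sketch, not React itself; the body object stands in for document.body):

```javascript
// Simulated document.body so this runs outside a browser.
const body = { style: { overflow: 'unset' } };

// The effect from ImageModal.js: lock scrolling while an image is open,
// and return a cleanup function that always restores it.
function scrollLockEffect(src) {
  body.style.overflow = src ? 'hidden' : 'unset';
  return () => { body.style.overflow = 'unset'; };
}

let cleanup = null;
function renderWithSrc(src) {
  if (cleanup) cleanup();           // React runs the previous cleanup first
  cleanup = scrollLockEffect(src);  // then re-runs the effect with the new src
}

renderWithSrc('photo.jpg');
console.log(body.style.overflow); // 'hidden' -> page scroll is locked
renderWithSrc(null);
console.log(body.style.overflow); // 'unset'  -> scroll restored
```

The cleanup-before-re-run ordering is what guarantees the scrollbar comes back even if the component unmounts while the modal is open.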

src/pages/LogDetailPage.js Explained

This file is responsible for displaying the detailed content of a single log entry. The changes here involve integrating the ImageModal and making images clickable.

// ... (existing imports)
import ImageModal from '../components/ImageModal'; // NEW: Import the ImageModal component

// ... (LinkRenderer component)

const LogDetailPage = () => {
  // ... (existing useParams, useState, useRef)

  // NEW: State to manage the image modal.
  // modalImageSrc will hold the URL of the image to display in the modal, or null if no modal is open.
  // setModalImageSrc is the function to update this state.
  const [modalImageSrc, setModalImageSrc] = useState(null);

  // ... (useEffect for fetching log data)

  // ... (loading and not found states)

  // NEW: Custom component to render <img> tags within ReactMarkdown.
  // This allows us to add custom behavior (like opening a modal) to images.
  const ImageRenderer = ({ src, alt }) => (
    <img
      src={src} // The source URL of the image
      alt={alt} // The alt text for accessibility
      className="cursor-pointer max-w-full h-auto" // Styling for the image, including making it look clickable
      onClick={() => setModalImageSrc(src)} // When the image is clicked, set its source to modalImageSrc, which opens the modal
    />
  );

  return (
    <div className="bg-gray-900 py-16 sm:py-24">
      <div className="mx-auto max-w-7xl px-6 lg:px-8">
        <div className="lg:grid lg:grid-cols-4 lg:gap-8">
          <div className="lg:col-span-3">
            <Link to="/logs" className="text-primary-400 hover:underline flex items-center justify-center gap-2 text-lg mb-4">
              <ArrowLeftIcon size={24} /> Back to Logs
            </Link>
            <div ref={contentRef} className="prose prose-xl prose-dark max-w-none">
              {/* MODIFIED: ReactMarkdown now uses our custom ImageRenderer for <img> tags */}
              <ReactMarkdown components={{ a: LinkRenderer, img: ImageRenderer }}>{log.body}</ReactMarkdown>
            </div>
          </div>
          <div className="hidden lg:block">
            <LogMetadata metadata={log.attributes} />
          </div>
        </div>
      </div>
      {/* NEW: The ImageModal component is rendered here. */}
      {/* It receives the image source from modalImageSrc and a function to close itself. */}
      <ImageModal src={modalImageSrc} alt="Full size image" onClose={() => setModalImageSrc(null)} />
    </div>
  );
};

export default LogDetailPage;

Key Changes in LogDetailPage.js:

  1. Import ImageModal: We added import ImageModal from '../components/ImageModal'; at the top of the file so we can use our new modal component.
  2. useState for modalImageSrc:
    • const [modalImageSrc, setModalImageSrc] = useState(null); was added. This creates a piece of state called modalImageSrc. Its initial value is null, meaning no image is currently open in the modal.
    • setModalImageSrc is the function we'll use to change modalImageSrc. When we call setModalImageSrc('path/to/image.jpg'), the modal will open with that image. When we call setModalImageSrc(null), the modal will close.
  3. ImageRenderer Component:
    • This is a small, new component defined within LogDetailPage. Its job is to tell ReactMarkdown how to render <img> tags.
    • Instead of just rendering a plain <img> tag, our ImageRenderer adds an onClick event handler.
    • onClick={() => setModalImageSrc(src)}: When a user clicks on an image, this function is called. It updates the modalImageSrc state with the src (source URL) of the clicked image. This change in state then triggers the ImageModal to appear.
  4. ReactMarkdown components Prop Modification:
    • The line <ReactMarkdown components={{ a: LinkRenderer }}>{log.body}</ReactMarkdown> was changed to <ReactMarkdown components={{ a: LinkRenderer, img: ImageRenderer }}>{log.body}</ReactMarkdown>.
    • This tells ReactMarkdown to use our LinkRenderer for <a> (link) tags and our new ImageRenderer for <img> (image) tags. Any other HTML tags will be rendered by ReactMarkdown's default behavior.
  5. Rendering ImageModal:
    • <ImageModal src={modalImageSrc} alt="Full size image" onClose={() => setModalImageSrc(null)} /> was added at the end of the LogDetailPage's return statement.
    • src={modalImageSrc}: This passes the current value of our modalImageSrc state to the ImageModal. If modalImageSrc is null, the ImageModal won't render (or will animate out if it was previously open). If it has an image URL, the modal will display that image.
    • onClose={() => setModalImageSrc(null)}: This passes a function to the ImageModal. When the ImageModal's close button is clicked, or the overlay is clicked, it calls this onClose function, which in turn sets modalImageSrc back to null, causing the modal to close.
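Stripped of rendering, the whole open/close flow boils down to one piece of state. As a plain-JavaScript sketch of the state transitions (a simulation, not React; onImageClick and the file name are illustrative):

```javascript
// modalImageSrc is the single piece of state driving the modal.
let modalImageSrc = null;                          // modal starts closed
const setModalImageSrc = (src) => { modalImageSrc = src; };

// ImageRenderer's onClick: clicking an inline image opens the modal.
const onImageClick = (src) => setModalImageSrc(src);
// ImageModal's onClose: the X button or an overlay click closes it.
const onClose = () => setModalImageSrc(null);

onImageClick('diagram.png');
console.log(modalImageSrc); // 'diagram.png' -> ImageModal renders this image
onClose();
console.log(modalImageSrc); // null -> AnimatePresence plays the exit animation
```

In the real component, each of these assignments also triggers a re-render, which is what makes the modal appear and disappear.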

Step-by-Step Summary of Actions

  1. Created src/components/ImageModal.js:

    • A new file was created to house the ImageModal component.
    • This component handles displaying the full-size image, the close button, and the overlay.
    • It uses useEffect to control body scrolling when the modal is open/closed.
    • It uses framer-motion's motion.div and AnimatePresence for animated transitions (fade and scale) when opening and closing.
  2. Modified src/pages/LogDetailPage.js (First replace call):

    • Added import ImageModal from '../components/ImageModal'; to bring the new modal component into LogDetailPage.
    • Added const [modalImageSrc, setModalImageSrc] = useState(null); to manage the state of the modal (whether it's open and which image it should display).
  3. Modified src/pages/LogDetailPage.js (Second replace call):

    • Defined a new functional component ImageRenderer within LogDetailPage. This component is a custom renderer for <img> tags. It takes the src and alt of an image and renders it, but also adds an onClick handler that calls setModalImageSrc(src) to open the modal with the clicked image.
    • Updated the ReactMarkdown component's components prop to include img: ImageRenderer. This tells ReactMarkdown to use our custom ImageRenderer whenever it encounters an image in the Markdown content.
    • Added the <ImageModal> component to the LogDetailPage's render output. It receives modalImageSrc as its src prop and a function to close itself (onClose={() => setModalImageSrc(null)}).

These changes collectively enable the interactive image modal feature with smooth animations.
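One line from ImageModal.js worth dwelling on is onClick={e => e.stopPropagation()} on the inner container: without it, a click on the image itself would bubble up to the overlay, whose onClick closes the modal. The bubbling can be sketched in plain JavaScript (a simulation of the idea, not the real DOM):

```javascript
// Handlers are called innermost-first, and stopPropagation() halts the walk,
// mimicking DOM event bubbling.
function dispatchClick(handlersInnermostFirst) {
  let stopped = false;
  const event = { stopPropagation: () => { stopped = true; } };
  for (const handler of handlersInnermostFirst) {
    if (stopped) break;
    handler(event);
  }
}

let modalOpen = true;
const onClose = () => { modalOpen = false; };
const innerContainer = (e) => e.stopPropagation(); // the inner motion.div
const overlay = () => onClose();                   // the outer motion.div

// Clicking the image: the inner handler stops the event, modal stays open.
dispatchClick([innerContainer, overlay]);
console.log(modalOpen); // true

// Clicking the dark overlay directly: only the overlay handler runs.
dispatchClick([overlay]);
console.log(modalOpen); // false
```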

Read more...

]]>