Show HN: FSP, the first algorithm that efficiently compresses tiny datasets under 100 bytes

1 point by Forgret a day ago

Most compression algorithms, like ZIP or RLE, fail on very small datasets. For example, a 52-byte dataset often ends up larger with ZIP due to headers and metadata.
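
To make that concrete, here is a quick illustration with Python's standard zipfile module (just a demo of the overhead, not code from the repo):

    import io, zipfile

    data = b"hello" * 10 + b"hi"  # 52 bytes
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("data.bin", data)
    # The archive's headers and central directory dwarf the payload,
    # so the "compressed" file comes out larger than the input.
    print(len(data), "->", len(buf.getvalue()), "bytes")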

FSP (Find Similar Patterns) works differently: it identifies similar blocks, stores a single base block, and records only the differences.
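
Here is a minimal Python sketch of that idea. It's my own toy reconstruction from the description above; the block size, choosing the first block as the base, and the (offset, byte) diff encoding are all assumptions, not the repo's actual code:

    def compress(data: bytes, block_size: int = 4):
        if not data:
            return b"", []
        # Split the input into fixed-size blocks (the last may be short).
        blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
        base = blocks[0]  # the single stored reference block (assumption)
        diffs = []
        for block in blocks:
            # Keep only the (offset, byte) pairs where this block
            # differs from the base block.
            diff = [(i, b) for i, b in enumerate(block)
                    if i >= len(base) or b != base[i]]
            diffs.append((len(block), diff))
        return base, diffs

    def decompress(base: bytes, diffs):
        out = bytearray()
        for length, diff in diffs:
            # Start from the base block, then patch in the differences.
            block = bytearray(base[:length].ljust(length, b"\x00"))
            for i, b in diff:
                block[i] = b
            out += block
        return bytes(out)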

In a test:

Original size: 52 bytes

Compressed size: 29 bytes

Compression ratio: 1.79×

Perfect decompression — no data loss

The GitHub repository contains:

README.txt — explains the algorithm with clear examples

Python test script — shows compression and decompression, and computes the compression ratio (a toy round trip is sketched below)
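
Using the toy compress/decompress sketch above, a round-trip check plus a rough ratio estimate might look like this (the sample input and the 1-byte-length / 2-bytes-per-diff accounting are my assumptions, not the repo's on-disk format):

    data = b"ABCDABCEABCDABCF" * 3 + b"ABCE"  # 52 bytes of near-repeating blocks
    base, diffs = compress(data)
    assert decompress(base, diffs) == data  # lossless round trip
    # Rough serialized size: base block + 1 length byte per block
    # + 2 bytes per (offset, byte) diff entry.
    est = len(base) + sum(1 + 2 * len(d) for _, d in diffs)
    print(f"{len(data)} -> {est} bytes ({len(data) / est:.2f}x)")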

GitHub: https://github.com/Ferki-git-creator/fsp

Website (full info): https://ferki-git-creator.github.io/fsp/

FSP works on any data: text, logs, versioned files, images, or video frames. It’s simple, universal, and the first algorithm in the world that can compress a dataset smaller than 100 bytes into an even smaller one.