
Shipyard 2025: Bringing IPFS Home

TL;DR

2025 was the year IPFS became practical for regular users. Seven Kubo releases shipped alongside dozens more across Rainbow, Desktop, WebUI, Companion, and the JavaScript ecosystem, including Helia and @helia/verified-fetch. The theme throughout: removing barriers that kept IPFS confined to datacenters and making it work on consumer hardware.

The headline feature is DHT Provide Sweep. Combined with AutoTLS, HTTP retrieval, and smarter network defaults, running a serious IPFS node at home is now viable.

For browsers, inbrowser.link and @helia/verified-fetch demonstrate IPFS retrieval without centralized gateways.

This post covers what shipped, why it matters, and where to find it:

Kubo

Seven major releases shipped in 2025: v0.33, v0.34, v0.35, v0.36, v0.37, v0.38, and v0.39. The theme: making IPFS self-hosting viable on home networks.

Provide Sweep: Self-Hosting Finally Works

The biggest change in 2025: Kubo’s DHT provider system was rebuilt from scratch.

You can now run a serious IPFS node at home, in a small office, or at a local library without overwhelming your network. The old DHT provider created hourly traffic spikes that saturated connections: if you had lots of content, your router would choke, your ISP might throttle you, and your node would fall behind on announcements. The math was brutal: you could only announce about 5,000 CIDs before content started disappearing from the DHT.

The new Sweep provider spreads network load evenly over time instead of dumping everything at once. By batching announcements to the same DHT servers, it achieves 97% fewer lookups when providing large numbers of CIDs.

You can now handle hundreds of thousands of CIDs without memory spikes, run on residential internet, and add content that becomes findable immediately. State persists across restarts, so rebooting doesn’t mean starting over.

Kubo 0.38 introduced Sweep as experimental; 0.39 made it the default. If there is one thing to take away from this post, it's this: update to Kubo 0.39 or later.
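If you are still on Kubo 0.38, where Sweep is opt-in, it can be enabled via config. A minimal sketch; the key name follows the 0.38 release notes' reorganized Provide section, so confirm it against your version's config docs:

```shell
# Opt in to the sweeping DHT provider on Kubo 0.38
# (default from 0.39 onward; key name per the 0.38 release notes)
ipfs config --json Provide.DHT.SweepEnabled true

# After restarting the daemon, watch announcement progress
ipfs stats provide
```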

For the technical deep-dive, see Provide Sweep: Solving the DHT Provide Bottleneck.

AutoTLS: Browsers Can Now Connect to Your Node

Browsers need Secure WebSocket (WSS) to connect to your node, and WSS requires TLS certificates that browsers can verify. AutoTLS obtains them automatically via a public good service deployed and run by Shipyard at registration.libp2p.direct. No DNS configuration, no certificate management, no renewal headaches.

Kubo 0.33 introduced opt-in support. 0.34 made it the default for nodes with 1+ hour uptime (adjustable via AutoTLS.RegistrationDelay).
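If you want to confirm AutoTLS is on, or get a certificate sooner than the default one-hour delay, both knobs are plain config values. A sketch, assuming a stock Kubo 0.34+ install; the 15m value is illustrative:

```shell
# Enable AutoTLS explicitly (already the default in Kubo 0.34+)
ipfs config --json AutoTLS.Enabled true

# Shorten the registration delay from the default 1 hour (illustrative value)
ipfs config AutoTLS.RegistrationDelay 15m
```

Restart the daemon afterwards so the new settings take effect.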

For technical details, see the AutoTLS blog post, the p2p-forge README, and AutoTLS configuration in Kubo.

Retrieval and Connectivity

HTTP Retrieval: Content now loads from more sources, including standard CDNs. Kubo fetches blocks over HTTPS using cryptographically verifiable ?format=raw responses, so a generic HTTP server without a libp2p stack can serve content with no trust assumptions. Kubo 0.35 added opt-in support; 0.36 enabled it by default alongside Bitswap. See the HTTPRetrieval configuration.
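To see what trustless HTTP retrieval looks like on the wire, you can request a single block yourself. A sketch; the gateway host and CID are placeholders, and the config key is the HTTPRetrieval one referenced above:

```shell
# Enable HTTP retrieval explicitly (already the default since Kubo 0.36)
ipfs config --json HTTPRetrieval.Enabled true

# Fetch one raw block from any trustless gateway
# (<gateway-host> and <cidv1> are placeholders)
curl -s -H "Accept: application/vnd.ipld.raw" \
  "https://<gateway-host>/ipfs/<cidv1>?format=raw" -o block.bin
```

Because the response is a raw block, the client can verify it by hashing the bytes and comparing against the CID, so no trust in the server is required.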

Bitswap Broadcast Reduction: Uses less bandwidth and stays stable under load. Kubo now tracks which peers actually respond before broadcasting to them, cutting broadcast messages by 80-98% and bandwidth by 50-95%. Shipped in Kubo 0.36.

AutoNATv2: Per-address reachability detection. Your node tests each address and transport separately (IPv4, IPv6, QUIC, WebSocket) and announces only working ones. AutoTLS certificates are requested only for dialable addresses. Shipped in Kubo 0.36.

UPnP Self-Healing: Your node stays connected even when your router restarts. Previously, router reboots meant lost port mappings and manual daemon restarts. Now Kubo recovers automatically. Shipped in Kubo 0.39.

Configuration and Maintenance

IPNS TTL: Your IPNS updates propagate faster. Default TTL dropped from 1 hour to 5 minutes, so changes become visible quickly without manual --ttl flags. Shipped in Kubo 0.34.
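If 5 minutes is still too slow for your use case, the TTL can be set per publish. A sketch against a running node; the CID is a placeholder:

```shell
# Publish an IPNS record with an explicit 1-minute TTL
# (<cidv1> is a placeholder for your content's CID)
ipfs name publish --ttl=1m /ipfs/<cidv1>
```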

AutoConf: Take control of network defaults. ["auto"] placeholders make hidden defaults explicit so you can inspect, replace, or disable them. Shipped in Kubo 0.37.
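As a sketch of taking control, you can inspect a value that resolves from "auto" and pin it to an explicit list (the bootstrap multiaddr below is illustrative):

```shell
# See what the "auto" placeholder currently resolves to
ipfs config Bootstrap

# Replace it with an explicit list (illustrative bootstrap address)
ipfs config --json Bootstrap \
  '["/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"]'
```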

Embedded Migrations: Upgrade offline. Repository migrations now run directly from the binary in milliseconds, no internet downloads required. Shipped in Kubo 0.37.

MFS Stability: Write thousands of files without crashes. Fixed memory growth, CPU spikes, and deadlocks when copying many files to MFS folders. Shipped in Kubo 0.33.

Pin Names: Name your pins for easier management. ipfs add --pin-name=mydata cat.jpg assigns a name at creation time. Shipped in Kubo 0.37.
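Names also work on existing pins and show up in listings. A short sketch, assuming Kubo 0.37+; the file name and CID are placeholders:

```shell
# Pin at add time with a name (placeholder file)
ipfs add --pin-name=mydata cat.jpg

# Name an existing pin (placeholder CID)
ipfs pin add --name=backup-2025 <cidv1>

# List pins together with their names
ipfs pin ls --names
```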

Logging: Debug your node in real time. Dynamic log levels work across the entire stack, including libp2p internals, thanks to a slog-to-go-log bridge. Adjust levels via ipfs log level, stream logs with ipfs log tail, or use the WebUI diagnostics screen for visual inspection. Shipped in Kubo 0.37 and 0.38.
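For example, to chase a routing problem you might raise verbosity for just one subsystem. A sketch against a running daemon; use ipfs log ls to see valid subsystem names:

```shell
# List available logging subsystems
ipfs log ls

# Raise one subsystem to debug (name per `ipfs log ls` output)
ipfs log level dht debug

# Stream log output live
ipfs log tail
```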

Pebble Datastore: Alternative storage backend for users who need something other than the default flatfs. Pebble may suit workloads with frequent writes or many small files, trading higher memory usage for improved throughput. See the datastores documentation.

RISC-V Binaries: Run IPFS on open hardware. Official linux-riscv64 prebuilt binaries are now available. Shipped in Kubo 0.39.

Infrastructure

Gateway

Gateway code lives in the Boxo Gateway Library and powers both Kubo and Rainbow. Boxo is tested against gateway-conformance to ensure compliance with the HTTP Gateway specifications. If you want to build your own gateway or embed one in an application, Boxo provides the building blocks.

Resource Protection: Gateways stay responsive under load. RetrievalTimeout returns 504 after 30 seconds with useful diagnostics, and MaxConcurrentRequests returns 429 when the gateway is overwhelmed. Shipped in Kubo 0.37.
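A sketch of tuning these limits; note the Gateway.* key placement here is our assumption and the values are illustrative, so confirm both against the Kubo config docs:

```shell
# Give up on stalled retrievals after 30s and return 504
# (illustrative value; key placement assumed)
ipfs config Gateway.RetrievalTimeout 30s

# Cap concurrent gateway requests; excess gets HTTP 429 (illustrative)
ipfs config --json Gateway.MaxConcurrentRequests 4096
```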

Diagnostic Error Pages: Easier debugging when content fails to load. 504 errors now show which retrieval phase failed and which providers were tried, and link to the retrieval check tool. Shipped in Kubo 0.38.

CDN Compatibility: Works better behind CDNs. MaxRangeRequestFileSize prevents bandwidth overcharges from range request limitations. Shipped in Kubo 0.39.

WebRecorder Support: Archive websites from IPFS. Negative HTTP Range requests enable WebRecorder to load snapshots. Shipped in Kubo 0.36.

Delegated Routing

Find content and peers without running a full DHT client/server. Someguy implements the Routing V1 HTTP API, participating in the DHT on behalf of lightweight clients and exposing results through a simple HTTP API. Browsers and mobile devices that lack resources for DHT participation can use delegated routing instead, or combine it with alternative routing systems and indexers for additional resiliency.

The public deployment at delegated-ipfs.dev is a public good delegated routing endpoint used by Kubo and Helia when the native DHT client is not enabled.
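Because it is plain HTTP, you can query the endpoint directly. A sketch following the Routing V1 spec paths; <cidv1> and <peer-id> are placeholders:

```shell
# Who provides this CID? Returns a JSON list of provider records.
curl -s -H "Accept: application/json" \
  "https://delegated-ipfs.dev/routing/v1/providers/<cidv1>"

# Where can this peer be reached? Returns known addresses.
curl -s -H "Accept: application/json" \
  "https://delegated-ipfs.dev/routing/v1/peers/<peer-id>"
```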

New in 2025: IPIP-476: Delegated Routing DHT Closest Peers API for finding DHT-closest peers (critical for browser nodes finding relay peers), HTTP block provider support allowing trustless gateways to act as routing backends, AutoConf support for flexible endpoint configuration, faster response times through improved caching, and IPIP-513: Delegated Routing V1 returns 200 for empty results, eliminating confusing browser console errors.

go-libp2p

Shipyard’s final go-libp2p releases shipped between January and September 2025, before transitioning maintenance to the community.

New in 2025: rate limiting with per-IP and per-subnet DoS protection plus QUIC source address verification (v0.42), per-address reachability via AutoNATv2 so nodes know which addresses are actually dialable (v0.42), error codes for stream resets and connection closes (v0.40), and HTTP Peer ID Auth for libp2p-native HTTP (v0.41).

User-Facing Tools

IPFS Desktop runs Kubo in the background with IPFS WebUI as its browser-based interface. It tracks each Kubo release, so all improvements described in this post are available without manual upgrades. New in 2025: Desktop no longer kills consumer routers and can run 24/7, files become shareable immediately after adding, nodes recover automatically after router restarts, grid view for visual file browsing, CAR import for DAGs, QR code sharing, a diagnostics screen with real-time logs and retrieval checks, and peer agent version display.

IPFS Companion is a browser extension that integrates IPFS into your web browser. New in 2025: improved privacy with no DNS queries to external services when your node is offline, hybrid polling for MV3 reliability when Chrome’s service worker goes dormant, and deduplicated concurrent DNSLink lookups.

Retrieval Check answers the question: why isn’t my content loading? check.ipfs.network shows exactly what went wrong: which providers advertised the content, whether they responded, and where the retrieval chain broke. The “Check CID retrievability” links in IPFS Desktop, WebUI, and gateway error pages all point here. New in 2025: HTTP retrievability checks (not just Bitswap), overhauled card-based UI with visual feedback, peer agent version display, IPNS and DNSLink resolution, and embeddable iframe support.

Browser Retrieval

In late 2024, we published IPFS on the Web in 2024, laying out a roadmap for making IPFS work natively in browsers. 2025 delivered on that roadmap.

Service Worker Gateway

Load IPFS websites and files directly in your browser without trusting a centralized gateway to do routing and retrieval for you. service-worker-gateway implements the Subdomain Gateway specification in a browser service worker, fetching blocks from providers, verifying them locally, and rendering content.

The public deployment at inbrowser.link is on the Public Suffix List, so each CID gets proper Origin isolation. Visit https://<cidv1>.ipfs.inbrowser.link and retrieval happens in your browser with the same security boundaries as any other website.

In A Post Gateway World, we analyzed traffic to public gateways at ipfs.io and dweb.link. About 9% comes from browser users loading websites. These users can transition to inbrowser.link for verifiable, offline-ready access without proxying their data and browsing history through centralized infrastructure.

Verified Fetch

@helia/verified-fetch is the fetch() API for IPFS content. It works like standard fetch() but for CIDs and IPFS paths: web apps fetch content directly and verify it locally without trusting a gateway. In A Post Gateway World, we identified this as the path forward for apps and hotlinks that currently depend on public gateways to handle routing, retrieval, and deserialization.
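A minimal sketch of what this looks like in app code, assuming @helia/verified-fetch is installed from npm and the runtime supports top-level await; the CID and path are placeholders:

```typescript
import { verifiedFetch } from '@helia/verified-fetch'

// Fetch an IPFS path much like window.fetch; blocks are retrieved
// from providers and verified locally against the CID.
// '<cidv1>' is a placeholder.
const resp = await verifiedFetch('ipfs://<cidv1>/hello.txt')

if (resp.ok) {
  console.log(await resp.text())
}
```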

js-libp2p

js-libp2p v3 shipped in September 2025 as Shipyard’s final release before transitioning maintenance to the community. It reworks streams: EventTarget-based APIs replace streaming iterables, and backpressure handling prevents memory exhaustion under load. See the release blog post for technical details.

Specifications

Shipyard authored two long-overdue specifications that were merged:

  • UnixFS specification: The file and directory format used by ipfs add finally has formal documentation. This is the format that powers MFS, pinning, and most content on IPFS.
  • IPFS Kademlia DHT specification: How IPFS nodes find each other and content. Documents the Amino DHT protocol, routing tables, provider records, and all the details implementations need to interoperate.

Three IPIPs were also ratified and adopted by implementations:

  • IPIP-476: Delegated Routing DHT Closest Peers API. Helps browser nodes find relay peers by querying for the closest peers to a given key.
  • IPIP-512: Identity CID 128-byte limit. Prevents abuse of identity multihashes by enforcing a size limit on inline data.
  • IPIP-513: Delegated Routing V1 returns 200 for empty results. Eliminates confusing browser console errors.

Looking Forward

2025 brought significant changes to how Shipyard operates. With renewed focus on IPFS and content addressing, the year delivered on a clear theme: making self-hosted IPFS practical on regular hardware.

That work continues. Provide Sweep handles hundreds of thousands of CIDs today; the goal is to improve the routing story to handle much more. HTTP retrieval shipped alongside Bitswap, opening the door to lightweight implementations that fetch content over plain HTTPS. inbrowser.link and @helia/verified-fetch prove that browsers can verify content without trusting a gateway, a key step toward the post-gateway world we outlined earlier this year.

The priority remains solving real problems: routing that finds content reliably, retrieval that loads it quickly, and specifications that help implementations work together. If IPFS is going to work for everyone, it needs to work on the hardware and networks people actually have.

Want to get involved? Install IPFS at docs.ipfs.tech/install, join the conversation at discuss.ipfs.tech, or reach out via email at the bottom of this page.


To dig deeper, see the detailed changelogs in each project's release notes.
