This is pretty rad.
I'm surprised no one's made a CEEFAX replica for the terminal yet [0]. Their weather page is pretty iconic [1].
[0] There are CEEFAX Emulators online that pull from the BBC RSS feeds to do this.
[1] https://teletextart.co.uk/wp-content/uploads/2016/05/weather...
Strictly speaking, one couldn't do it properly (rendering the pages as actual text for TUIs rather than graphically for GUIs) until Unicode version 13 came along, which included the necessary block graphics characters and was released only 5 years ago.
And even then one needs modern fonts, like Viznut's Unscii or GNU Unifont, which cover the necessary code points (or one of the terminal emulators that algorithmically construct block and line characters and have been updated for Unicode 13).
* https://github.com/jdebp/unscii/blob/2.1.1f/src/grids.txt#L4...
* https://github.com/jdebp/unscii/blob/2.1.1f/src/grids.txt#L9...
Even if it couldn't be perfectly replicated, I'm sure it could've been done in some way before - after all, it's almost 20 years since someone set up a telnet service that broadcast football World Cup games as converted ASCII "video" generated live from the TV broadcasts! https://www.freshandnew.org/2006/06/watch-the-world-cup-in-a...
(And I actually remember it being surprisingly watchable, you could follow what was happening in the game even though you couldn't judge stuff like players' ball control or anything like that.)
I use Brandy perfectly fine by spawning a Mode7 (Teletext like) script to browse servers/BBS'...
https://github.com/stardot/MatrixBrandy
There is kind of one now https://github.com/shift/ceefax-weather :D
What took you so long?!
Have you actually run it?
It would be strange if they used AI to create it, published on GitHub, and shared on HN, but didn't bother running it once...
Of course I have. It's nothing impressive and far from a 100% clone of the CEEFAX page. But it's a start if someone wanted to take it further. I was more interested in trying out ratatui with Gemini.
That is pretty. Can you link? Took me a moment to realise it wasn't July 20th yet. Can't imagine the weather was like that 9 years ago!
You need some U.K.-specific knowledge, which is that CEEFAX went off air in 2012. If you see a screenshot of genuine CEEFAX (not one of the several modern things that pretend to be teletext) it will be from before 2012, possibly from long before. It was a service embedded in analogue PAL broadcasts that could be captured as page text (with all of the control characters) by BBC Micro users who had bought the Acorn "Teletext Adapter", as long ago as the early 1980s.
Bummer, thanks for the reply!
July 20, 2016 was a Wednesday and the screencap shows Friday. The most recent year before 2016 in which 20 July fell on a Friday is 2012.
No idea how to pull historical UK weather data to see if it matches :)
That's a niche within a niche, I know, but for those using Waybar (https://github.com/Alexays/Waybar/), I've built wttrbar (https://github.com/bjesus/wttrbar/) - it uses Wttr.in to display a nice detailed weather widget in your bar.
Thank you! I was just thinking "how do I get this to display in Waybar", and now I don't have to spend time working on it.
EDIT: this is particularly timely because the UK Met Office has recently announced the retirement of the API I was previously using: https://www.metoffice.gov.uk/services/data/datapoint/datapoi...
Hey, I use and love this widget. Thanks for building and releasing it!
This is super fun. I love it. I feel like weather data is both free and openly available (NWS) but any random search I do for the info is embedded in dozens of ads actively trying to slow me down before I can get to the exact search I'm after. Using this precludes all that. Gorgeous.
(Bug report - It shows me a full weather forecast even if it doesn't know where I am!)

[snip]
$ curl wttr.in
Weather report: not found
(then shows pretty forecasts anyway)
[/snip]
Edit: Is there a way to show Fahrenheit instead of Celsius? I don't see it in the options https://wttr.in/:help. OH. "u".

I really like the idea, but the data quality for my city (Trondheim, Norway) was unfortunately too off for me to use.
The national forecast service (yr.no) is saying it will be sunny and very hot all through the weekend, while wttr reports it will be 16-19 degrees Celsius and rain on Saturday.
Years ago I was recommended yr.no for weather forecasts, and I visit it often.
I wonder what's special about Norway's meteorologists that they have exceptionally good quality data and ability to build and run a useful public service.
+1 for yr.no, I'm a long-time user. Their predictions got a bit worse over the last 2-3 years (the error variance is larger now), but anecdotally other providers' predictions got even worse.
Yr is generally very accurate. I am from Serbia and I use it as well.
yr.no is the best for my location (northern Germany) as well.
Many locals use DWD (German Weather Service).
A lot of the German sailors use dmi.dk (Danish meteorological institute).
A lot of the Danish sailors use yr.no :)
Note the terminal -> HTML conversion used to serve wttr.in is based on https://github.com/pixelb/scripts/blob/master/scripts/ansi2h...
Weather over DNS
Wttr is an essential in my i3bar: curl -s 'https://wttr.in/Revelstoke,BC?format=4&u'
It does not seem to support plain ASCII, nor VT100 line drawing. I also tried setting the Accept and Accept-Charset headers, among others, and they do not seem to have any effect.
Unfortunately 3-letter airport codes don't work as advertised, because for many airport codes there are actual cities with the same 3-letter name and those take precedence in their lookup.
Multiple GitHub issues around this have been opened already.
Otherwise pretty neat of course!
curl wttr.in/London > london.txt
open -a TextEdit london.txt

Witness the control code garbage.

IMHO you should not emit ANSI escape sequences until you at least call isatty, preferably also consult termcap. But also IMHO we should bury the terminals, and build better REPLs and lightweight GUI toolkits.
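For the local-process case, the gating being asked for is a one-liner. A minimal sketch in Python (the function and color names here are mine, not from any tool in this thread):

```python
import sys

ANSI_RED = "\x1b[31m"
ANSI_RESET = "\x1b[0m"

def maybe_colorize(text, color=ANSI_RED, stream=None):
    """Wrap text in ANSI color codes only when writing to a real terminal.

    When stdout is a pipe or a file, isatty() is False and the text
    passes through unmodified, so `myprog > out.txt` stays clean.
    """
    stream = stream or sys.stdout
    if stream.isatty():
        return f"{color}{text}{ANSI_RESET}"
    return text
```

A fuller version would also consult terminfo/termcap (e.g. via the `curses` module) rather than assume every tty understands color.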
> IMHO you should not emit ANSI escape sequences until you at least call isatty, preferably also consult termcap.
How exactly do you propose that wttr.in, which is not actually a process running on your machine (but a remote server), call isatty() on your machine?
Or are you suggesting that curl should check isatty() and strip out the control codes? But that would be overstepping curl's responsibilities, wouldn't it? Its job is to faithfully output the response, garbage or not.
> How exactly do you propose that wttr.in, which is not actually a process running on your machine (but a remote server), call isatty() on your machine?
That's exactly my point. You can't do that.
This is not plaintext, this is ANSI garbage. If you're outputting HTML, you set the content type to text/html so the client can interpret it. But the lack of an associated content type is not the problem; it's the blind assumptions about the capabilities of the client.

Thanks for clarifying. You're right! The output isn't actually text/plain. As someone who values standards, it is annoying to see control-code garbage when the content type claims to be text/plain. But wttr.in seems more like a fun novelty than a serious service, and I suspect they don't pay much attention to standards. Still, I'm not sure that excuses saying one thing in the headers and delivering something else in the body.
But you've got a fair point. So thanks!
Control codes are documented in a standard for use in terminals. So not all standards are valued?
Agreed, but what would the right content type even be? AFAIK there's no `text/tty` or `text/with-control-characters` etc. On the other hand, using the generic `application/octet-stream` seems unnecessarily vague?
Here's my shot:
- curl sees that the standard output is a tty, consults $TERM, termcap, etc
- curl crafts an "Accept:" header, format to be specified
- server sees Accept and responds with appropriately encoded response; e.g. for text/plain it would just output preformatted text
As this is currently NOT a common use case (mostly fun toys, biggest use case is Github printing out a pride flag), the exact content type can be easily iterated on to standardise it.
For example, the most common cases (TERM=xterm or xterm-256color) could be specified in the standard and treated as abbreviations for the complete description of capabilities. The server can have those common cases built-in, but it should also be free to ignore any capabilities it doesn't understand and send out a conservative response. All of these smarts could be a part of a library or framework.
I made this up on the spot, it's not hard, because the entire stack is adequately layered. So just don't break those layers, m'kay?
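The client half of that proposal could look something like this. To be clear, the `text/x-ansi` media type and its `term` parameter are invented here purely for illustration; no such negotiation is standardized anywhere:

```python
import os
import sys

def build_accept_header(stream=None, env=None):
    """Hypothetical content negotiation for terminal output.

    Ask for ANSI-decorated output only when stdout is a tty with a
    recognized TERM; otherwise request plain preformatted text.
    """
    stream = stream or sys.stdout
    env = env if env is not None else os.environ
    if not stream.isatty():
        return "Accept: text/plain"
    term = env.get("TERM", "dumb")
    if term in ("xterm", "xterm-256color"):
        # Treat well-known TERM values as shorthand for a capability set,
        # with plain text as the lower-quality fallback.
        return f"Accept: text/x-ansi; term={term}, text/plain; q=0.5"
    return "Accept: text/plain"
```

Per the proposal, a server that doesn't understand the `term` parameter would simply ignore it and serve the conservative `text/plain` variant.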
Piping it to
will work.

This is a web API that does not have access to your local computer.
And you can disable the ANSI control codes:
```
To force plain text, which disables colors:
$ curl wttr.in/?T
```
You can also just change the format to whatever suits you best.
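A few of the documented query options (from wttr.in/:help: `T` disables terminal sequences, `u` switches to US units, `format=1..4` gives one-line output), combined here with a small URL-building helper of my own:

```python
def wttr_url(location="", **params):
    """Build a wttr.in URL from keyword options.

    Boolean flags like T (no terminal sequences) and u (US units) are
    emitted bare, without `=value`, matching wttr.in's option style.
    """
    query = "&".join(k if v is True else f"{k}={v}" for k, v in params.items())
    return f"https://wttr.in/{location}" + (f"?{query}" if query else "")
```

So `wttr_url("London", format=3)` yields `https://wttr.in/London?format=3`, the compact one-liner that works well in status bars.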
I love this one, it's excellent at packing lots and lots of information in very little space.
It's sadly a victim of its own success and is quite often over quota with its weather API. We should make a paid version that wouldn't have this problem and bring some monetary karma to Igor.
> As of the end of June 2025, wttr.in handles 20-25 million queries per day from 150,000 to 175,000 users, according to the access logs.
No wonder! That works out at about 133-143 requests per user per day. Presumably due to scripts refreshing their data 24/7.
Another solution is just to host it yourself, given the code is open source. No quota worries, and you can always donate to Igor if you feel so inclined (assuming he wants/accepts donations).
I get unreasonably angry at people inconsiderately hammering web services. Especially some minor operation built on nothing but love.
I really appreciate this service.
Worth pointing out, maybe, that there is an emacs package, too - more than one, actually, the one I am using (occasionally, at least) is https://github.com/cjennings/emacs-wttrin which is available from melpa.
Kinda neat. One UX gotcha that I spotted right away: I'm polling weather for my area (UTC+3) and it gives me some night-time values even though it's noon. I'm thinking timezones?
This is cool, but it seems to give different results for my city depending on whether I use the normal view or the v2 or ?format views. The current weather is closer to the normal view.
For some reason, I was expecting a user experience like:
Interesting idea. Surely one could write a weather command that would just forward $@ to an LLM to make a structured request. On the other hand, this doesn't seem useful enough to justify the needed compute.
I do not think I would need an LLM for making something like that
I take the request:
> $ weather in san francisco, today evening?
To be an example of some free-form written request without any special format. Parsing that input seems like a reasonable job for an LLM, right? Otherwise we will have the typical adventure game problem of “use thumb on button” doesn’t work because it expected “finger,” or whatever.
Exactly, this is what I meant. No matter how much or for which reasons one might dislike LLMs, you can't deny that they are the best general NLP tools we have right now.
I’m quite confused as to how you could have possibly been misunderstood, and kinda wonder if it is just some folks who wanted to find an interpretation that makes you wrong.
Yes, absolutely. I certainly don't need an LLM to do something like that.
When I ask for the weather, I want to know exactly what the Met Office says the weather is. Not what an LLM guesses the Met Office might have said, with a non-zero chance of introducing a hallucination.
This habit of inserting LLMs into otherwise deterministic tasks is frustrating. Why take something that can be solved accurately and deterministically (like parsing the Met Office's data) and make it probabilistic, error-prone, and unpredictable with an LLM?
LLMs are appropriate for problems we cannot solve deterministically and accurately. They are not appropriate for problems we can already solve deterministically and accurately.
I'm pretty sure the idea is to use an LLM to parse the natural language into a query, not for guessing the weather.
I didn't assume either that the LLM is to guess the weather. I said that using an LLM for parsing the Met Office's data is maybe not such a good idea if you can do it deterministically.
The idea was "forward $@ to a LLM to make a structured request", not to parse a structured response.
Fun fact: The fire from a strawman this size could warm a small town for many days.
Impressively snide for such a bad point.
I think that this popularity is making the site slow down dramatically. I hope all these hits won't cost too much $$
I like the service but I've displayed this via curl on my home dashboard for more than 2 years - and the uptime is not great.
You could self host it which would hopefully give better uptime (while also helping reduce strain on the public service)?
Time for some FUD :)
Printing arbitrary output to most terminal emulators carries some security risk (even if pretty much everyone does it). Many suffer from vulnerabilities, both past and present, that can allow specially crafted text to inject commands back into the shell. The issue lies in the complex and often legacy standards for handling control characters and escape sequences.
Even xterm is not entirely immune to these problems and has had security advisories issued in the past.
While this attack surface has received attention from security researchers in the past, it's not remotely comparable to the scrutiny applied to web browsers. The ecosystem around terminals generally lacks the massive, continuously funded bug bounty programs and the large-scale, constant fuzzing that browsers are subjected to.
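One defensive habit when displaying untrusted output is to strip escape sequences and stray control characters first. A rough filter along those lines (this regex covers only common CSI and OSC sequences; a production sanitizer needs to handle the full ECMA-48 grammar):

```python
import re

# CSI sequences (ESC [ ... final byte) and OSC sequences (ESC ] ... BEL or ST).
_ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)")
# Lone C0 control characters other than tab and newline, plus DEL.
_CTRL_RE = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def strip_escapes(text):
    """Remove ANSI escape sequences, then any leftover control characters.

    Order matters: deleting control characters first would break the
    escape sequences apart and let their payload bytes survive.
    """
    return _CTRL_RE.sub("", _ANSI_RE.sub("", text))
```

This is roughly what tools like `less` (without `-R`) or `cat -v` protect you from by refusing to pass raw control bytes through to the terminal.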
This was a welcome find today on HN. Gave my day a bit of joy.
We need more little ANSI suns in the age of AI slop
missing the epic music - https://weatherstar.netbymatt.com/
The site is down :(
HN hug of death
From this developer: a talk about this project, some similar of his projects, and console/textmode-web interfaces: https://media.ccc.de/v/gpn18-164-using-and-creating-console-...
It needs a compact non-ASCII graphics form; in Termux on my phone, the ASCII output is too big for the screen size.
It has a json API, if you want to spin up and customise something for your window size.
It has one. There is a ?format= URL parameter.
Used it for a while but moved over to a national weather service for better data and uptime.
Nice API though.
How do you make use of your national weather service? Do you get a similar terminal output in the end?
Yeah. I use the one in a neighboring country because they provide decent JSON and image APIs, unlike my local weather service.
In one use case I take 'https://api.met.no/weatherapi/locationforecast/2.0/compact?l...' and push through a jq incantation to format the prognosis for the coming five hours into a neat packaging for terminal viewing, then put that in a watch -n on five minutes. I'm not really interested in the escape sequences and ASCII art.
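The same flattening can be done without jq. The sketch below runs against a hand-written sample in the layout of met.no's locationforecast 2.0 `compact` documents (`properties.timeseries[].data.instant.details` etc.); check the real schema before relying on these field names:

```python
import json

# Hand-written fragment shaped like a locationforecast "compact" response.
SAMPLE = json.loads("""
{"properties": {"timeseries": [
  {"time": "2025-07-18T10:00:00Z",
   "data": {"instant": {"details": {"air_temperature": 21.3}},
            "next_1_hours": {"summary": {"symbol_code": "clearsky_day"}}}},
  {"time": "2025-07-18T11:00:00Z",
   "data": {"instant": {"details": {"air_temperature": 22.1}},
            "next_1_hours": {"summary": {"symbol_code": "partlycloudy_day"}}}}
]}}
""")

def next_hours(doc, n=5):
    """Flatten the first n timeseries entries into (time, temp, symbol) rows."""
    rows = []
    for entry in doc["properties"]["timeseries"][:n]:
        temp = entry["data"]["instant"]["details"]["air_temperature"]
        # next_1_hours is absent far out in the forecast, hence the .get chain.
        symbol = entry["data"].get("next_1_hours", {}).get("summary", {}).get("symbol_code", "")
        rows.append((entry["time"], temp, symbol))
    return rows
```

In the real pipeline you'd fetch the document with curl (met.no requires an identifying User-Agent header) and feed the rows to whatever terminal formatting you like under `watch -n 300`.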
Ah, the age of the terminal is still very much well and truly with us. If only teenage me, clutching my VT100 back in 1988 as it was being removed to be replaced with 'a modern computer interface', had known not to fret so much and just let the future have its way ..
The very awesome awesome-console-services list has more neat tools like this:
https://github.com/chubin/awesome-console-services
My favourite is:
$ nc ticker.bitcointicker.co 10080
.. which is a nice thing to check while waiting for builds ..
And then, there is this wonderful, wonderful thing:
$ curl cheat.sh
Such a great resource when all you've got is a terminal and 15 minutes waiting for those builds ..
Another great one, which I have found very useful for sending myself links across an air gap ..
$ curl qrenco.de/https://news.ycombinator.com/item\?id\=44590971
Okay, one more, because I just can't get enough:
$ curl https://api.lyrics.ovh/v1/depeche-mode/behind-the-wheel
[dead]
This is great, but you could vibe this and have your own custom version hitting free weather services and getting the specific info you want, without making calls to a global service that might not stick around. Also, making calls from a terminal could expose your server as a target: you might have access an attacker wants, and being gullible enough to use random services suggests your security might not be great. Even if the developer is well-intentioned, the person who takes over their domain later might not be. And curl itself has had vulnerabilities.
This is getting ridiculous. Proposing that accepting any random LLM suggestion for a random endpoint would be more trustworthy and reliable than a service which has been developed, trusted, and working for a decade… The server-exposure claims are equally nonsensical.
These comments are getting absurd, and are worryingly coming more and more from new accounts. Are you yourself a bot designed to spam communities and hype coding with LLMs?
That’s not what they’re saying. They’re saying that you could write code to do this and not be visiting some random page that targets you or potentially exploits vulnerabilities in curl.
"You can vibe this" means getting an LLM to create it, not writing the code yourself. That's what people mean when they use the term "vibe coding" (or just "vibing" in their comment).
Though vibe coding doesn't prohibit the human from making the decision on which weather API to use, so of all the criticisms to make about LLM use, I don't actually agree with the person you replied to, who suggested it has to mean "accepting any random LLM suggestion for a random endpoint".