Message: 1
Date: Sun, 29 Apr 2018 12:45:24 +0200 (CEST)
Subject: RE: Feedback wanted: bold headers?
Content-Type: text/plain; charset=US-ASCII; format=flowed
A possible syntax in a similar vein could be to use a CURL_COLORS environment
variable (and command line option) and allow it to set a colon-separated list:
server=1;35:content-length=4;32:*=1
That makes "server" headers bold purple, "content-length" headers underlined
green, and everything else just bold.
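For concreteness, an invocation could then look something like this (purely
hypothetical, of course - neither the variable nor the option exists today):

    CURL_COLORS='server=1;35:content-length=4;32:*=1' curl -v https://example.com/ -o /dev/null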
But I'm not entirely convinced people actually want or would use such a
flexible approach...
This is what I expected; the requirement never stays "just bold".
The colors supported by terminals vary - as do the colors mapped to the
color numbers.
People with visual impairments (e.g. color blindness) have issues - and
modify termcap.
So, again, if you do this, use termcap; it has solved these problems for
many years.
Don't do your own thing. It would be like me writing my own utility to
fetch http resources
because "curl is too heavyweight". Wouldn't be long before it looked a
lot like curl :-)
------------------------------
Message: 4
Date: Sun, 29 Apr 2018
Content-Type: text/plain; charset="utf-8"; Format="flowed"
On Sun, 29 Apr 2018,
Use termcap. Or (n)curses. Don't hardcode even the ANSI bold controls.
That would be taking things (much) further than I'd be comfortable with.
Then don't do it at all. Termcap is easy to use; there are libraries to
fetch the right control sequence for whatever terminal (as may be modified
by user capabilities). Hardcoding your own idea of beauty and/or
terminal/user capabilities is something that you (or your users) will regret.
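For instance, on a terminfo-based system something like this picks up the
right sequences for whatever TERM (plus any user overrides) is in effect -
just a sketch, nothing curl-specific:

    bold=$(tput bold)      # the terminal's own 'enter bold' sequence
    green=$(tput setaf 2)  # foreground color 2, only if the terminal has colors
    reset=$(tput sgr0)     # turn all attributes back off
    printf '%sContent-Length:%s 1234\n' "$bold$green" "$reset"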
If you mean that you don't want to do a fancy ncurses windowed gui at
this time, I'm OK
with that. Though scrolling regions for client and server headers would
be much more
useful than just bolding header names. As I noted, these days the list
of headers can
run into the dozens, and when debugging one often would like to compare
request with
response. E.g. If-Modified-Since with Last-Modified. My current
approach is to copy the
whole trace into an emacs buffer, split the window horizontally, and
scroll as needed.
Note that this discards your bolding... and that an emacs face can
easily highlight header names.
(Or vim, or whatever editor you like.)
Make sure that you don't break copy and paste.
How would I break or not break that?
A paste buffer may (depends on OS) hold multiple formats: graphics,
plain text, rich text, html, ...
If you enhance text, the terminal may store what you copy differently
(or without plain text).
Since, as noted above, today's use model is often "do something, copy
screen, diagnose", you
need to make sure that the new behavior doesn't break it.
If you default to no enhancements, you should be OK.
If you default to enhancements on a terminal, you may not. E.g. if bold
triggers storing only
rich text in the paste buffer. That's a per-terminal emulator behavior.
As a user, I don't want to be doubly surprised: why are the header names
in bold? and why
doesn't copy & paste work anymore?
2>&1 curl -v http://www.spamhaus.org/drop/dropv6.txt | \
sed -e's/^< \([A-Z][A-Za-z-]*:\)/< \x1b[1m\1\x1b[m/' | less -R
Of course that breaks in 22 different ways when you add other factors in. But
sure, that works for the simple use case.
I didn't propose it as an engineered solution; merely to show that one
can have your
enhancement without modifying curl. A fully engineered wrapper would
be a small project. And would use termcap.
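Something along these lines would be the core of it - a sketch only, with
option handling, error checks, and the other 22 ways it can break omitted;
note the codes come from terminfo rather than a hardcoded escape:

    #!/bin/sh
    # curlb - hypothetical wrapper that bolds header names in curl -v output
    bold=$(tput bold)    # terminal's 'enter bold' sequence
    sgr0=$(tput sgr0)    # terminal's 'reset attributes' sequence
    curl -v "$@" 2>&1 | sed -e "s/^\([<>] [A-Z][A-Za-z-]*:\)/${bold}\1${sgr0}/"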
I'd find that to be too huge a penalty for just doing bold escape codes.
What's "huge" about retrieving the 'bold' and 'end bold' codes from termcap?
Inside the library, it's just a hash lookup that happens once per run of
curl. The
library and termcap file are frequently used; they are likely to be in
the filesystem
cache(s).
If you mean running tput twice - well, it's a shell script - that's the
API; shell scripts
do lots of process activations. (The second is likely to be cached.)
But compared
to the network traffic, who cares? It's a few milliseconds. If you do
the work in
curl, you'll be in the microseconds range - a small price to pay for
portability.
As another alternative, how about providing an option for curl to write
a pcap file?
e.g., it might open the connection, then spawn a tcpdump command with a
precise filter
covering just that connection.
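Roughly what such a spawned capture could look like once curl knows the
addresses and ports involved (the host and port numbers here are made up):

    # needs capture privileges; the filter pins it to the one connection
    tcpdump -i any -s 0 -w curl-trace.pcap \
        'host www.example.com and tcp port 443 and tcp port 54321'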
Or call libpcap itself. Or generate a synthetic trace with the data in
plaintext.
With a capture file in hand, wireshark provides a GUI with all the fancy
colorizing options that you
might want. And so do many other network analysis tools.
Such a capture would also be useful in diagnosing transfer issues at all
levels of the stack. Wireshark dissectors will link requests and
responses & provide multiple levels of detail.
Why have curl help?
Using wireshark directly is a pain. You don't know the port numbers in
advance; you need to deal with two different filter syntaxes; you often
end up capturing a huge amount of extraneous data. And you need to use
multiple tools simultaneously.
(How often have I tried to click "start" on wireshark just before "<CR>"
on an application
window? Lots.)
Plus, with any encrypted protocol, you need private keys. (Good luck
with that.)
curl has the cleartext, so it could turn https into http (just generate a
fake TCP stream trace). And a synthetic trace doesn't need privileges.
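As one (hand-waved) sketch of that - assuming the decrypted data were first
massaged into the hex-dump layout text2pcap expects, which curl's --trace
output isn't quite - wireshark's text2pcap can wrap it in dummy TCP/IP headers:

    # hypothetical: dump.txt is the decrypted stream as a text2pcap-style hex dump
    text2pcap -T 54321,80 dump.txt fake-http.pcap  # -T prepends dummy TCP headers with these ports
    wireshark fake-http.pcap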
Yes, I know about curl's trace files - which are very useful. But a
.pcap file can be consumed
by wireshark (and other tools) - which allows enhanced presentation to
be their problem, not
curl's.
If you have time to invest, this seems to address a more general problem
and produce a much more flexible result with a bounded investment of
time. In any case, curl is your tool & you'll do what you wish. You have
the feedback that you asked for. I hope it helps.
Timothe Litt
ACM Distinguished Engineer
--------------------------
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.