Discussion:
Idea: master/client concept
Daniel Stenberg
2018-04-18 10:36:24 UTC
Permalink
Hey,

This concept has been discussed a bit at both curl up conferences so far, so I
decided it was about time to start jotting down some thoughts on how it could
work:

https://github.com/curl/curl/wiki/curl-tool-master-client

It's a wiki. Feel free to add your ideas/thoughts.
--
/ daniel.haxx.se
-----------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-users
Etiquette: https://curl.h
Richard Gray
2018-04-18 13:46:34 UTC
Permalink
Post by Daniel Stenberg
Hey,
This concept has been discussed a bit at both curl up conferences so far, so I
decided it was about time to start jotting down some thoughts on how it could work:
https://github.com/curl/curl/wiki/curl-tool-master-client
It's a wiki. Feel free to add your ideas/thoughts.
Since it is along the same lines (keeping curl running with cached
connections, etc. between requests), I'll point at a suggestion I made back in
June of 2014 which is sort of the inverse of the master suggestion:

Scripting curl from the inside out (Re: curl the next few years)
https://curl.haxx.se/mail/archive-2014-06/0022.html

It proposes adding a --exec <program> option which would specify a program
(script) to be invoked after each transfer, to analyze the result of the just
completed transfer and optionally start a new one.
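As a sketch of how that might have looked (--exec does not exist in curl; the flag's behavior, the script name, and the idea of the script emitting a follow-up URL are illustrations of the proposal, not real options):

```
# hypothetical: run check.sh after the transfer finishes; if the
# script decides a follow-up is needed (say, by printing a new URL),
# curl would start another transfer on the same open connection
curl --exec ./check.sh https://example.com/page1.html
```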

Obviously, the m-c concept is more flexible and straightforward to use.
Suggestions (sorry, no time to mess with the wiki):

- Should be possible to just tell curl to use a master, start one if not
already done:
curl --master <transfer>

- Master should have an inactivity timeout?
curl --master-min 7 # start if needed, 7 min timeout
curl --master-min 10 <transfer> # start if needed, 10 min timeout
curl --master-sec 300 <transfer> # like above, better for testing!

- Should be an explicit stop command?
curl --master-stop # stops a running master
curl --master-stop <transfer> # stops master after transfer completes

- Can the master be made to stop when the shell that launched it exits?

- curlrc should be able to transparently cause the master to be used?
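Pulling the suggestions above together, a session might look like this (every --master* flag here is a proposal from this thread, not an existing curl option):

```
curl --master-min 10 https://example.com/a  # start a master if needed, 10 min idle timeout
curl --master https://example.com/b         # client run: reuses the master's live connections
curl --master-stop                          # explicitly stop the running master
```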

Rich

Daniel Stenberg
2018-04-19 07:18:05 UTC
Permalink
Post by Richard Gray
Scripting curl from the inside out (Re: curl the next few years)
https://curl.haxx.se/mail/archive-2014-06/0022.html
It proposes adding a --exec <program> option which would specify a program
(script) to be invoked after a transfer to analyze the result of the just
completed transfer and optionally start a new one.
Obviously, the m-c concept is more flexible and straightforward to use.
I think m-c has a benefit that existing scripts could be adjusted to take
advantage of it with very little extra editing. That's one of the primary
beauties of it. Another being that we don't have to create much new syntax or
anything. It'll basically be like using curl normally, but with some added
super powers like easier use of persistent connections etc.

I used the rest of your ideas and expanded the wiki page:

https://github.com/curl/curl/wiki/curl-tool-master-client

Thanks!
--
/ daniel.haxx.se
Xavier !! <M X >
2018-04-19 09:09:24 UTC
Permalink
Hi Daniel, curl-users,

Really happy to see a move in that direction: thanks a lot!!

What about keeping most of the way we use curl today intact, and leveraging the use of "-K" (read from file/stdin/named pipe) in conjunction with "--next"?

Instead of reading the whole configuration from the file, stdin, or named pipe before starting transfers, curl would parse options up to the "--next", pause the config-file reading, fire off the transfer, then continue parsing where it left off.

When started with "-K", curl would never quit until it receives an EOF.

That way, an application could "remote-control" a copy of curl through a pipe and send it a set of URLs computed dynamically.

As usual, curl would keep its connections open for reduced latency on subsequent transfers, and use the standard connection pool that holds N connections alive after use, in case they are reused in a subsequent request.

(Could each connection even be used as the basis for parallel transfers?)

That would also speed up transfers when curl is used at the end of a shell pipeline, where a large list of URLs is computed and passed in.

Some "--next" alternative could be used for that explicit functionality from the start, to make it less likely to rock any boats. Something like "--flush" or "--run".

curl could even be extended to listen on a Unix socket or TCP port, reading its config from there instead of a pipe.
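For reference, the building blocks already exist in today's config-file syntax (long option names without dashes, "next" separating option sets); what the proposal changes is *when* curl acts on them. A minimal sketch, using file:// URLs so it runs without a network: with current curl the whole config is read to EOF before anything starts, while under the proposal each transfer would fire as soon as its "next" arrives:

```shell
# two local files standing in for remote resources
printf 'hello one' > /tmp/mc_one.txt
printf 'hello two' > /tmp/mc_two.txt

# curl config-file syntax: long option names without dashes,
# "next" starts a fresh option set for the following transfer
cat > /tmp/mc.cfg <<'EOF'
url = "file:///tmp/mc_one.txt"
output = "/tmp/mc_out1.txt"
next
url = "file:///tmp/mc_two.txt"
output = "/tmp/mc_out2.txt"
EOF

# today: curl parses the whole config first, then runs both transfers
curl -s -K /tmp/mc.cfg
```

Replace /tmp/mc.cfg with a named pipe (mkfifo) and you have the remote-control setup described above: the feeding process writes option sets into the pipe and curl, under the proposal, would execute each one as it arrives instead of waiting for EOF.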

(Most of the above proposal comes from an exchange on the curl-users mailing list from Dec 15, 2015: 'curl waits for stdin to "EOF" before firing requests when "-K -" is used to read config from stdin')

I don't know of a proper way to get this proposal into the wiki, so I'm using the mailing list.
Sorry about that.

Regards,

Xavier
Dan Fandrich
2018-04-19 10:14:34 UTC
Permalink
Post by Xavier !! <M X >
What about keeping most of the way we use curl today intact, and leveraging
the use of "-K" (read from file/stdin/named pipe) in conjunction with "--next"?
This is my preference as well, as it makes the fewest changes to how curl
already works. We discussed it during an unconference session at last year's
curl://up and it seemed like it would solve the use cases people were coming up
with for such a feature.
Daniel Stenberg
2018-04-19 14:06:54 UTC
Permalink
Post by Dan Fandrich
Post by Xavier !! <M X >
What about keeping most of the way we use curl today intact, and leveraging
the use of "-K" (read from file/stdin/named pipe) in conjunction with "--next"?
This is my preference as well, as it makes the fewest changes to how curl
already works. We discussed it during an unconference session at last year's
curl://up and it seemed like it would solve the use cases people were coming
up with for such a feature.
Yeps, making curl able to keep getting new instructions -K style from a
file/pipe while it is already transferring files is certainly a good first
step.

That will also (basically) require that we switch to using the multi interface
internally. Every one of these new fancy features we can think of requires
that. Quite clearly that's a starting point...
--
/ daniel.haxx.se
Dan Fandrich
2018-04-19 14:56:56 UTC
Permalink
Post by Daniel Stenberg
Post by Dan Fandrich
Post by Xavier !! <M X >
What about keeping most of the way we use curl today intact, and leveraging
the use of "-K" (read from file/stdin/named pipe) in conjunction with "--next"?
This is my preference as well, as it makes the fewest changes to how curl
already works. We discussed it during an unconference session at last
year's curl://up and it seemed like it would solve the use cases people
were coming up with for such a feature.
Yeps, making curl able to keep getting new instructions -K style from a
file/pipe while it is already transferring files is certainly a good first
step.
That will also (basically) require that we switch to using the multi
interface internally. Basically every one of these new fancy features we can
think of requires that. Quite clearly that's a starting point...
I don't think that's actually a prerequisite. The transfers could be done
sequentially, just as they are done now with --next. But we should keep in mind
how parallel transfers would work in the future using such a mechanism.
Daniel Stenberg
2018-04-19 15:03:39 UTC
Permalink
Post by Dan Fandrich
Post by Daniel Stenberg
That will also (basically) require that we switch to using the multi
interface internally.
I don't think that's actually a prerequisite. The transfers could be done
sequentially, just as they are done now with --next.
Yes, it is certainly possible, but I was also thinking of the pipe or stdin:
with the easy interface it won't be read until between transfers (unless we
want to use a dedicated thread for it), and then we might run into issues
with the pipe running full before we read it.

When using the multi interface, we would just add the pipe's fd to the mix and
read from there as soon as it signals there's something to read...
--
/ daniel.haxx.se