Patchwork [stable] worker: avoid potential partial write of pickled data

Submitter Manuel Jacob
Date May 22, 2022, 2:34 a.m.
Message ID <beebf9c4b8ed6257c8f8.1653186898@tmp>
Permalink /patch/50987/
State New

Comments

Manuel Jacob - May 22, 2022, 2:34 a.m.
# HG changeset patch
# User Manuel Jacob <me@manueljacob.de>
# Date 1653184234 -7200
#      Sun May 22 03:50:34 2022 +0200
# Branch stable
# Node ID beebf9c4b8ed6257c8f8bfeb5e9fcae6f54268d7
# Parent  477b5145e1a02715f846ce017b460858a58e03b1
# EXP-Topic worker-pickle-fix_partial_write
worker: avoid potential partial write of pickled data

Previously, the code wrote the pickled data using os.write(). However,
os.write() can write fewer bytes than passed to it. To trigger the problem, the
pickled data had to be larger than 2147479552 bytes on my system.

Instead, open a file object and pass it to pickle.dump(). This also has the
advantage that it doesn’t buffer the whole pickled data in memory.

Note that the opened file must be buffered, because pickle doesn’t support
unbuffered streams: an unbuffered stream’s write() method may write fewer
bytes than passed to it (like os.write()), but pickle.dump() relies on all
bytes being written (see https://github.com/python/cpython/issues/93050).
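The two approaches the commit message contrasts can be sketched as follows. The `write_all` helper is hypothetical (it is not in the patch) and only illustrates what handling os.write()'s short writes manually would look like; the second half shows the streaming alternative the patch adopts, using a buffered file object:

```python
import os
import pickle
import tempfile

def write_all(fd, data):
    # Hypothetical helper, not part of the patch: keep calling
    # os.write() until every byte is written, since a single call
    # may write fewer bytes than requested.
    view = memoryview(data)
    while view:
        written = os.write(fd, view)
        view = view[written:]

# The approach the patch takes instead: hand a *buffered* binary file
# object to pickle.dump(), which streams the data incrementally rather
# than first building one huge bytes object in memory.
with tempfile.TemporaryFile() as wf:  # buffered by default
    pickle.dump(list(range(1000)), wf)
    wf.flush()
    wf.seek(0)
    assert pickle.load(wf) == list(range(1000))
```

Note that short writes in practice only show up for very large buffers (the commit message cites ~2 GiB), so small-scale tests of the os.write() path will not reproduce the bug.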
Yuya Nishihara - May 22, 2022, 3:44 a.m.
On Sun, 22 May 2022 04:34:58 +0200, Manuel Jacob wrote:
> # HG changeset patch
> # User Manuel Jacob <me@manueljacob.de>
> # Date 1653184234 -7200
> #      Sun May 22 03:50:34 2022 +0200
> # Branch stable
> # Node ID beebf9c4b8ed6257c8f8bfeb5e9fcae6f54268d7
> # Parent  477b5145e1a02715f846ce017b460858a58e03b1
> # EXP-Topic worker-pickle-fix_partial_write
> worker: avoid potential partial write of pickled data
> 
> Previously, the code wrote the pickled data using os.write(). However,
> os.write() can write less bytes than passed to it. To trigger the problem, the
> pickled data had to be larger than 2147479552 bytes on my system.
> 
> Instead, open a file object and pass it to pickle.dump(). This also has the
> advantage that it doesn’t buffer the whole pickled data in memory.
> 
> Note that the opened file must be buffered because pickle doesn’t support
> unbuffered streams because unbuffered streams’ write() method might write less
> bytes than passed to it (like os.write()) but pickle.dump() relies on that all
> bytes are written (see https://github.com/python/cpython/issues/93050).
> 
> diff --git a/mercurial/worker.py b/mercurial/worker.py
> --- a/mercurial/worker.py
> +++ b/mercurial/worker.py
> @@ -255,8 +255,10 @@
>                          os.close(r)
>                          os.close(w)
>                      os.close(rfd)
> +                    wf = os.fdopen(wfd, 'wb')
>                      for result in func(*(staticargs + (pargs,))):
> -                        os.write(wfd, util.pickle.dumps(result))
> +                        util.pickle.dump(result, wf)
> +                        wf.flush()

It's probably better to write "with os.fdopen(wfd, 'wb') as wf:" to clarify
that both wf and the underlying wfd are closed there.
Manuel Jacob - May 22, 2022, 5:19 a.m.
On 22/05/2022 05.44, Yuya Nishihara wrote:
> On Sun, 22 May 2022 04:34:58 +0200, Manuel Jacob wrote:
>> # HG changeset patch
>> # User Manuel Jacob <me@manueljacob.de>
>> # Date 1653184234 -7200
>> #      Sun May 22 03:50:34 2022 +0200
>> # Branch stable
>> # Node ID beebf9c4b8ed6257c8f8bfeb5e9fcae6f54268d7
>> # Parent  477b5145e1a02715f846ce017b460858a58e03b1
>> # EXP-Topic worker-pickle-fix_partial_write
>> worker: avoid potential partial write of pickled data
>>
>> Previously, the code wrote the pickled data using os.write(). However,
>> os.write() can write less bytes than passed to it. To trigger the problem, the
>> pickled data had to be larger than 2147479552 bytes on my system.
>>
>> Instead, open a file object and pass it to pickle.dump(). This also has the
>> advantage that it doesn’t buffer the whole pickled data in memory.
>>
>> Note that the opened file must be buffered because pickle doesn’t support
>> unbuffered streams because unbuffered streams’ write() method might write less
>> bytes than passed to it (like os.write()) but pickle.dump() relies on that all
>> bytes are written (see https://github.com/python/cpython/issues/93050).
>>
>> diff --git a/mercurial/worker.py b/mercurial/worker.py
>> --- a/mercurial/worker.py
>> +++ b/mercurial/worker.py
>> @@ -255,8 +255,10 @@
>>                           os.close(r)
>>                           os.close(w)
>>                       os.close(rfd)
>> +                    wf = os.fdopen(wfd, 'wb')
>>                       for result in func(*(staticargs + (pargs,))):
>> -                        os.write(wfd, util.pickle.dumps(result))
>> +                        util.pickle.dump(result, wf)
>> +                        wf.flush()
> 
> It's probably better to write "with os.fdopen(wfd, 'wb') as wf:" to clarify
> that the wf and wfd are closed there.

wfd was not explicitly closed in the child process before. But I agree 
that this would be a good idea.
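The suggested with-statement variant might look like the sketch below. The function name and the pipe-based demonstration are illustrative, not the actual worker.py code; the point is that leaving the with-block closes wf and, with it, the underlying wfd descriptor:

```python
import os
import pickle

def child_writer(wfd, results):
    # Illustrative sketch of the suggestion: os.fdopen() wraps the raw
    # descriptor in a buffered file object, and the with-block closes
    # both wf and wfd on exit, even if pickling raises.
    with os.fdopen(wfd, 'wb') as wf:
        for result in results:
            pickle.dump(result, wf)
            wf.flush()  # push each result to the reader promptly

# Reading back from a pipe: each result arrives as its own pickle frame.
# The payloads here are small enough to fit in the pipe buffer, so no
# separate process is needed for the demonstration.
rfd, wfd = os.pipe()
child_writer(wfd, [1, 'two', [3, 4]])
with os.fdopen(rfd, 'rb') as rf:
    assert pickle.load(rf) == 1
    assert pickle.load(rf) == 'two'
    assert pickle.load(rf) == [3, 4]
```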

Patch

diff --git a/mercurial/worker.py b/mercurial/worker.py
--- a/mercurial/worker.py
+++ b/mercurial/worker.py
@@ -255,8 +255,10 @@ 
                         os.close(r)
                         os.close(w)
                     os.close(rfd)
+                    wf = os.fdopen(wfd, 'wb')
                     for result in func(*(staticargs + (pargs,))):
-                        os.write(wfd, util.pickle.dumps(result))
+                        util.pickle.dump(result, wf)
+                        wf.flush()
                     return 0
 
                 ret = scmutil.callcatch(ui, workerfunc)