Patchwork D7962: lfs: fix the stall and corruption issue when concurrently uploading blobs

Submitter phabricator
Date Jan. 21, 2020, 6:38 p.m.
Message ID <differential-rev-PHID-DREV-4lbahruyiewrw7k2mows-req@mercurial-scm.org>
Permalink /patch/44559/
State Superseded

Comments

phabricator - Jan. 21, 2020, 6:38 p.m.
mharbison72 created this revision.
Herald added a subscriber: mercurial-devel.
Herald added a reviewer: hg-reviewers.

REVISION SUMMARY
  We've avoided the issue up to this point by gating worker usage with an
  experimental config.  See 10e62d5efa73 <https://phab.mercurial-scm.org/rHG10e62d5efa733de31f54962f1bf5e5029e20a694>
  and the thread linked there for some of the initial diagnosis.  Essentially,
  some data was read from the blob before an error occurred, and `keepalive`
  retried the request without rewinding the file pointer.  The leading data was
  therefore lost from the blob on the server, and the connection stalled while
  trying to send more data than was available.
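
  As a toy illustration (not Mercurial's actual keepalive code, just a minimal
  sketch of the failure mode): if the first attempt consumes the file pointer
  and the retry does not rewind it, the retry sends fewer bytes than the
  advertised Content-Length, and the server waits for data that never arrives.

    import io

    def send_with_naive_retry(fp, send, retries=2):
        """Send fp's contents; retry on failure WITHOUT rewinding (the bug)."""
        last_error = None
        for attempt in range(retries):
            try:
                return send(fp.read())  # a retry reads from wherever fp stopped
            except IOError as e:
                last_error = e
        raise last_error

    blob = io.BytesIO(b'0123456789')
    attempts = []

    def flaky_send(data):
        attempts.append(data)
        if len(attempts) == 1:
            raise IOError('connection reset')  # simulate the failed first try
        return len(data)

    send_with_naive_retry(blob, flaky_send)
    print(attempts)  # [b'0123456789', b''] -- the retry has nothing left to send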
  
  I was unable to reproduce this when uploading from Windows to CentOS 7, but it
  reproduced every time when uploading from one CentOS 7 machine to another over
  https.
  
  I found recent fixes in the Facebook repo that address this[1][2].  The commit
  message for the first is:
  
    The KeepAlive HTTP implementation is bugged in its retry logic: it supports
    reading from a file pointer, but doesn't support rewinding of the seek cursor
    when it performs a retry. So it can happen that an upload fails for whatever
    reason and will then 'hang' on the retry event.
    
    The sequence of events that get triggered are:
     - Upload file A, goes OK. Keep-Alive caches connection.
     - Upload file B, fails due to (for example) failing Keep-Alive, but LFS file
       pointer has been consumed for the upload and fd has been closed.
     - Retry for file B starts, sets the Content-Length properly to the expected
       file size, but since file pointer has been consumed no data will be uploaded,
       causing the server to wait for the uploaded data until either client or
       server reaches a timeout, making it seem as our mercurial process hangs.
    
    This is just a stop-gap measure to prevent this behavior from blocking Mercurial
    (LFS has retry logic). A proper solution needs to be built on top of this
    stop-gap measure: for uploads from file pointers, we should support fseek() on
    the interface. Since we expect to consume the whole file always anyway, this
    should be safe. This way we can seek back to the beginning on a retry.
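
  A minimal sketch of that stop-gap (hypothetical code, not the actual change
  from either repository): rewinding the file pointer before every attempt
  makes a retry re-send the whole payload instead of stalling.

    def send_with_rewinding_retry(fp, send, retries=2):
        """Send fp's contents, seeking back to the start before each attempt."""
        last_error = None
        for attempt in range(retries):
            fp.seek(0)  # rewind so a retry re-sends the whole blob
            try:
                return send(fp.read())
            except IOError as e:
                last_error = e
        raise last_error

  With the `flaky_send` from the earlier sketch, both attempts carry the full
  payload and the second one succeeds.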
  
  I ported those two patches, and it works.  But I see that `url._sendfile()`
  does a rewind on `httpsendfile` objects[3], so maybe it's better to keep this
  all in one place and avoid a second seek.  We may still want the first
  Facebook patch as extra protection against this problem in general.  The other
  two uses of `httpsendfile` are in the wire protocol (uploading bundles) and in
  largefiles (uploading largefiles).  Neither of these appears to use a worker,
  and I'm not sure why workers seem to trigger this, or whether it could have
  happened without a worker.
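
  To illustrate the "keep it all in one place" idea, here is a simplified,
  hypothetical version of that kind of rewind (not the actual code in
  `mercurial/url.py`): the sending layer seeks any seekable request body back to
  the start before transmitting it, so every caller that hands it a file-like
  object is covered.

    def sendrequestbody(connection, body):
        # Rewind seekable bodies (httpsendfile-style objects) before sending,
        # so a retried request transmits the whole payload rather than whatever
        # is left after a partially consumed read.
        if hasattr(body, 'seek'):
            body.seek(0)
        if hasattr(body, 'read'):
            for chunk in iter(lambda: body.read(65536), b''):
                connection.send(chunk)
        else:
            connection.send(body)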
  
  Since `httpsendfile` already has a `close()` method, the local one is dropped.
  That class also explicitly says it has no `__len__` attribute, so that is
  removed too.  The `read()` override is necessary to avoid the per-file
  progress bar usage.
  
  [1] https://github.com/facebookexperimental/eden/commit/c350d6536d90c044c837abdd3675185644481469
  [2] https://github.com/facebookexperimental/eden/commit/77f0d3fd0415e81b63e317e457af9c55c46103ee
  [3] https://www.mercurial-scm.org/repo/hg/file/5.2.2/mercurial/url.py#l176

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D7962

AFFECTED FILES
  hgext/lfs/blobstore.py

CHANGE DETAILS




To: mharbison72, #hg-reviewers
Cc: mercurial-devel
phabricator - Jan. 21, 2020, 6:43 p.m.
mharbison72 added a comment.


  Even though this talks about fixing corruption, I don't *think* we need this in the next release.  We've lived with this problem for 2 years now.  But somebody who really understands how all of the URL request handling and keepalive stuff works should probably take a look before then.  I still don't know that this *can't* happen without using workers (or if it is possible in other usage contexts), so we may still want to shore things up there.

REPOSITORY
  rHG Mercurial

CHANGES SINCE LAST ACTION
  https://phab.mercurial-scm.org/D7962/new/

REVISION DETAIL
  https://phab.mercurial-scm.org/D7962

To: mharbison72, #hg-reviewers
Cc: mercurial-devel
phabricator - Feb. 5, 2020, 11:19 p.m.
marmoute added a comment.
marmoute accepted this revision.


  The patch is a bit obscure, but the explanation is strong.

REPOSITORY
  rHG Mercurial

CHANGES SINCE LAST ACTION
  https://phab.mercurial-scm.org/D7962/new/

REVISION DETAIL
  https://phab.mercurial-scm.org/D7962

To: mharbison72, #hg-reviewers, marmoute
Cc: marmoute, mercurial-devel

Patch

diff --git a/hgext/lfs/blobstore.py b/hgext/lfs/blobstore.py
--- a/hgext/lfs/blobstore.py
+++ b/hgext/lfs/blobstore.py
@@ -21,6 +21,7 @@ 
 from mercurial import (
     encoding,
     error,
+    httpconnection as httpconnectionmod,
     node,
     pathutil,
     pycompat,
@@ -94,28 +95,16 @@ 
         pass
 
 
-class lfsuploadfile(object):
-    """a file-like object that supports __len__ and read.
+class lfsuploadfile(httpconnectionmod.httpsendfile):
+    """a file-like object that supports keepalive.
     """
 
-    def __init__(self, fp):
-        self._fp = fp
-        fp.seek(0, os.SEEK_END)
-        self._len = fp.tell()
-        fp.seek(0)
-
-    def __len__(self):
-        return self._len
+    def __init__(self, ui, filename):
+        super(lfsuploadfile, self).__init__(ui, filename, b'rb')
+        self.read = self._data.read
 
-    def read(self, size):
-        if self._fp is None:
-            return b''
-        return self._fp.read(size)
-
-    def close(self):
-        if self._fp is not None:
-            self._fp.close()
-            self._fp = None
+    def _makeprogress(self):
+        return None  # progress is handled by the worker client
 
 
 class local(object):
@@ -507,10 +496,10 @@ 
 
         try:
             if action == b'upload':
-                request.data = lfsuploadfile(localstore.open(oid))
+                request.data = lfsuploadfile(self.ui, localstore.path(oid))
                 request.get_method = lambda: 'PUT'
                 request.add_header('Content-Type', 'application/octet-stream')
-                request.add_header('Content-Length', len(request.data))
+                request.add_header('Content-Length', request.data.length)
 
             with contextlib.closing(self.urlopener.open(request)) as res:
                 contentlength = res.info().get(b"content-length")