Patchwork D11280: fix: reduce number of tool executions

Submitter: phabricator
Date: Aug. 12, 2021, 2:16 a.m.
Message ID: <differential-rev-PHID-DREV-pys56iijswy6gemwo2uf-req@mercurial-scm.org>
Permalink: /patch/49594/
State: New

Comments

phabricator - Aug. 12, 2021, 2:16 a.m.
hooper created this revision.
Herald added a reviewer: hg-reviewers.
Herald added a subscriber: mercurial-patches.

REVISION SUMMARY
  By grouping together (path, ctx) pairs according to the inputs they would
  provide to fixer tools, we can deduplicate executions of fixer tools to
  significantly reduce the amount of time spent running slow tools.
  
  This change does not handle clean files in the working copy, which could still
  be deduplicated against the files in the checked-out commit. It's a little
  harder to do that because the filerev is not available in the workingfilectx
  (and it doesn't exist for added files).
  
  Anecdotally, this change makes some real use cases at Google 10x faster. I
  think we were originally hesitant to do this because the benefits weren't
  obvious, and implementing it efficiently is kind of tricky. If we simply
  memoized the formatter execution function, we would be keeping tons of file
  content in memory.
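  The core idea can be sketched in isolation: group work items that would feed
  identical input to a fixer tool, run the tool once per unique input, and fan
  the single result out to every revision in the group. This is a minimal,
  hypothetical illustration of the technique (the names `build_workqueue` and
  `inputkey` are invented here; the real patch keys on
  `(fctx.filerev(), baserevs, path)` inside hgext/fix.py):

```python
import collections

def build_workqueue(items):
    """Deduplicate fixer-tool executions.

    `items` maps each (rev, path) work item to a hashable key describing
    the exact input the fixer tool would see for it, e.g.
    (filerev, baserevs, path).  Items with equal keys would produce
    identical tool output, so the tool only needs to run once per key.
    Hypothetical helper for illustration, not Mercurial's actual API.
    """
    groups = collections.defaultdict(list)
    for (rev, path), inputkey in items.items():
        groups[inputkey].append(rev)
    # One work item per unique input: run the tool against the first
    # ("source") revision, then fan the result out to every destination
    # revision in the group.  Passing a reference to the result, rather
    # than memoizing per-revision copies, avoids holding extra file
    # content in memory.
    return [
        (revs[0], path, revs)
        for (filerev, baserevs, path), revs in groups.items()
    ]
```

  For example, if revisions 1 and 2 contain byte-identical `foo` with the same
  base contexts, they collapse into one work item `(1, b'foo', [1, 2])`, so the
  tool runs once instead of twice.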

REPOSITORY
  rHG Mercurial

BRANCH
  stable

REVISION DETAIL
  https://phab.mercurial-scm.org/D11280

AFFECTED FILES
  hgext/fix.py
  tests/test-fix.t

CHANGE DETAILS

To: hooper, #hg-reviewers
Cc: mercurial-patches, mercurial-devel

Patch

diff --git a/tests/test-fix.t b/tests/test-fix.t
--- a/tests/test-fix.t
+++ b/tests/test-fix.t
@@ -1758,8 +1758,8 @@ 
   $ cat $LOGFILE | sort | uniq -c
         4 bar.log
         4 baz.log
-        4 foo.log
-        4 qux.log
+        3 foo.log
+        2 qux.log
 
   $ cd ..
 
diff --git a/hgext/fix.py b/hgext/fix.py
--- a/hgext/fix.py
+++ b/hgext/fix.py
@@ -283,20 +283,29 @@ 
         # There are no data dependencies between the workers fixing each file
         # revision, so we can use all available parallelism.
         def getfixes(items):
-            for rev, path in items:
-                ctx = repo[rev]
+            for srcrev, path, dstrevs in items:
+                ctx = repo[srcrev]
                 olddata = ctx[path].data()
                 metadata, newdata = fixfile(
-                    ui, repo, opts, fixers, ctx, path, basepaths, basectxs[rev]
+                    ui,
+                    repo,
+                    opts,
+                    fixers,
+                    ctx,
+                    path,
+                    basepaths,
+                    basectxs[srcrev],
                 )
-                # Don't waste memory/time passing unchanged content back, but
-                # produce one result per item either way.
-                yield (
-                    rev,
-                    path,
-                    metadata,
-                    newdata if newdata != olddata else None,
-                )
+                # We ungroup the work items now, because the code that consumes
+                # these results has to handle each dstrev separately, and in
+                # topological order. Because these are handled in topological
+                # order, it's important that we pass around references to
+                # "newdata" instead of copying it. Otherwise, we would be
+                # keeping more copies of file content in memory at a time than
+                # if we hadn't bothered to group/deduplicate the work items.
+                data = newdata if newdata != olddata else None
+                for dstrev in dstrevs:
+                    yield (dstrev, path, metadata, data)
 
         results = worker.worker(
             ui, 1.0, getfixes, tuple(), workqueue, threadsafe=False
@@ -392,7 +401,7 @@ 
     items by ascending revision number to match the order in which we commit
     the fixes later.
     """
-    workqueue = []
+    dstrevmap = collections.defaultdict(list)
     numitems = collections.defaultdict(int)
     maxfilesize = ui.configbytes(b'fix', b'maxfilesize')
     for rev in sorted(revstofix):
@@ -410,8 +419,13 @@ 
                     % (util.bytecount(maxfilesize), path)
                 )
                 continue
-            workqueue.append((rev, path))
+            baserevs = tuple(ctx.rev() for ctx in basectxs[rev])
+            dstrevmap[(fctx.filerev(), baserevs, path)].append(rev)
             numitems[rev] += 1
+    workqueue = [
+        (dstrevs[0], path, dstrevs)
+        for (filerev, baserevs, path), dstrevs in dstrevmap.items()
+    ]
     return workqueue, numitems
 
 
@@ -516,9 +530,9 @@ 
         return {}
 
     basepaths = {}
-    for rev, path in workqueue:
-        fixctx = repo[rev]
-        for basectx in basectxs[rev]:
+    for srcrev, path, dstrevs in workqueue:
+        fixctx = repo[srcrev]
+        for basectx in basectxs[srcrev]:
             basepath = copies.pathcopies(basectx, fixctx).get(path, path)
             if basepath in basectx:
                 basepaths[(basectx.rev(), fixctx.rev(), path)] = basepath
@@ -641,10 +655,10 @@ 
     toprefetch = set()
 
     # Prefetch the files that will be fixed.
-    for rev, path in workqueue:
-        if rev == wdirrev:
+    for srcrev, path, dstrevs in workqueue:
+        if srcrev == wdirrev:
             continue
-        toprefetch.add((rev, path))
+        toprefetch.add((srcrev, path))
 
     # Prefetch the base contents for lineranges().
     for (baserev, fixrev, path), basepath in basepaths.items():