Patchwork [3,of,8] compression: introduce a `storage.revlog.zlib.level` configuration

Submitter Pierre-Yves David
Date March 31, 2019, 3:36 p.m.
Message ID <df7c537a8d07d6c1d4e7.1554046579@nodosa.octopoid.net>
Permalink /patch/39424/
State Accepted

Comments

Pierre-Yves David - March 31, 2019, 3:36 p.m.
# HG changeset patch
# User Pierre-Yves David <pierre-yves.david@octobus.net>
# Date 1553708127 -3600
#      Wed Mar 27 18:35:27 2019 +0100
# Node ID df7c537a8d07d6c1d4e7aa7604af30a57717bcf6
# Parent  0779dd6ec612bf7dcb5ca4628b42409dad2cde29
# EXP-Topic zstd-revlog
# Available At https://bitbucket.org/octobus/mercurial-devel/
#              hg pull https://bitbucket.org/octobus/mercurial-devel/ -r df7c537a8d07
compression: introduce a `storage.revlog.zlib.level` configuration

This option controls the zlib compression level used when compressing revlog
chunks.
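
For anyone who wants to try it, the option is read from the [storage] section
of an hgrc file, using the same syntax as the tests further down in this patch:

  [storage]
  # accepted values range from 1 (lowest compression) to 9 (highest);
  # leaving the option unset keeps zlib's default level of 6
  revlog.zlib.level = 9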

This is also a good excuse to pave the way for a similar configuration option
for the zstd compression engine. Having a dedicated option for each compression
algorithm is useful because they don't support the same range of values.

Using a higher zlib compression level impacts CPU consumption at compression
time, but does not directly affect decompression time. However, dealing with
smaller compressed chunks can directly help decompression and indirectly help
other revlog logic.
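
That asymmetry is easy to check outside of any revlog logic with plain zlib;
a minimal sketch (the input file name is only a placeholder):

    import time
    import zlib

    data = open('some-large-file', 'rb').read()  # placeholder input

    for level in (1, 6, 9):
        t0 = time.perf_counter()
        compressed = zlib.compress(data, level)
        t1 = time.perf_counter()
        zlib.decompress(compressed)
        t2 = time.perf_counter()
        # compression time grows with the level; decompression stays mostly flat
        print(level, len(compressed), t1 - t0, t2 - t1)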

I ran some basic tests on repositories using different levels. I am using the
mercurial, pypy, netbeans and mozilla-central clones from our benchmark suite.
All tested repositories use sparse-revlog and had all their deltas recomputed.

The different compression levels have a small effect on the repository size
(about 10% variation over the total range). My quick analysis is that revlogs
mostly store small deltas, which are not affected much by the compression level.
So the variation probably mostly comes from better compression of the snapshot
revisions, and snapshot revisions only represent a small portion of the
repository content.

I also made some basic timing measurements. The "read" timings are gathered
using simple runs of `hg perfrevlogrevisions`, the "write" measurements using
`hg perfrevlogwrite` (restricted to the last 5000 revisions for netbeans and
mozilla-central). The timings are gathered on a generic machine (not one of
our performance-locked machines), so small variations might not be meaningful.
However, large trends remain relevant.
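
Both commands come from Mercurial's contrib/perf.py extension; the invocations
looked roughly like the following (the extension path is just an example, and
the exact flags used to restrict the revision range may differ):

  $ hg --config extensions.perf=~/src/mercurial/contrib/perf.py perfrevlogrevisions
  $ hg --config extensions.perf=~/src/mercurial/contrib/perf.py perfrevlogwrite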

Keep in mind that these numbers are not pure compression/decompression times.
They also involve the full revlog logic. In particular, the difference in chunk
size has an impact on the delta chain structure, affecting performance when
writing or reading them.

On read/write performance, the compression level has a bigger impact.
Counter-intuitively, higher compression levels yield better "write" performance
for the large repositories in our tested setting. Maybe because the last 5000
delta chains end up having a very different shape in this specific spot? Or maybe
because of a more general trend of better delta chains thanks to the smaller
chunks and snapshots.

This series does not intend to change the default compression level. However,
these results call for a deeper analysis of this performance difference in the
future.

Full data
=========

repo   level  .hg/store size  00manifest.d read       write
----------------------------------------------------------------
mercurial  1      49,402,813     5,963,475   0.170159  53.250304
mercurial  6      47,197,397     5,875,730   0.182820  56.264320
mercurial  9      47,121,596     5,849,781   0.189219  56.293612

pypy       1     370,830,572    28,462,425   2.679217 460.721984
pypy       6     340,112,317    27,648,747   2.768691 467.537158
pypy       9     338,360,736    27,639,003   2.763495 476.589918

netbeans   1   1,281,847,810   165,495,457 122.477027 520.560316
netbeans   6   1,205,284,353   159,161,207 139.876147 715.930400
netbeans   9   1,197,135,671   155,034,586 141.620281 678.297064

mozilla    1   2,775,497,186   298,527,987 147.867662 751.263721
mozilla    6   2,596,856,420   286,597,671 170.572118 987.056093
mozilla    9   2,587,542,494   287,018,264 163.622338 739.803002
Josef 'Jeff' Sipek - April 2, 2019, 7:29 a.m.
On Sun, Mar 31, 2019 at 17:36:19 +0200, Pierre-Yves David wrote:
...
> compression: introduce a `storage.revlog.zlib.level` configuration
> 
> This option control the zlib compression level used when compression revlog
> chunk.
> 
> This is also a good excuse to pave the way for a similar configuration option
> for the zstd compression engine. Having a dedicated option for each compression
> algorithm is useful because they don't support the same range of values.
> 
> Using a higher zlib compression impact CPU consumption at compression time, but
> does not directly affected decompression time. However dealing with small
> compressed chunk can directly help decompression and indirectly help other
> revlog logic.
> 
> I ran some basic test on repositories using different level. I am user the

s/user/using/ ?

...
> I also made some basic timing measurement. The "read" timing are gathered using
> simple run of `hg perfrevlogrevisions`, the "write" measurement using `hg
> perfrevlogwrite` (restricted to the last 5000 revisions for netbeans and
> mozilla central). The timing are gathered on a generic machine, (not one  of
> our performance locked machine), so small variation might not be meaningful.

You did more than one measurement, so measurement -> measurements, and
timing -> timings?  Alternatively, keep the singular but then make the verbs
match: are -> is.

Sorry to nit-pick, but since this text will end up in the commit messages...
:)

> However large trend remains relevant.
> 
> Keep in mind that these number are not pure compression/decompression time.

s/number/numbers/

> They also involve the full revlog logic. In particular the difference in chunk
> size has an impact on the delta chain structure, affecting performance when
> writing or reading them.
> 
> On read/write performance, the compression level has a bigger impact.
> Counter-intuitively, higher compression level raise better "write" performance

s/raise better/increase/ ?

This actually confuses me a bit.  Based on the table below, it looks like
a higher compression level has a non-linear effect on read/write performance.
Maybe I'm not understanding what you meant by 'raise "better"'.

While I expect to see a "hump" in *write* performance (because high zlib
compression levels are such CPU hogs), I didn't expect to see one for *read*
performance.  I suppose the read hump could be explained by the shape of the
DAG, as you point out.

> for the large repositories in our tested setting. Maybe because the last 5000
> delta chain end up having a very different shape in this specific spot? Or maybe
> because of a more general trend of better delta chains thanks to the smaller
> chunk and snapshot.
> 
> This series does not intend to change the default compression level. However,
> these result call for a deeper analysis of this performance difference in the
> future.
> 
> Full data
> =========
> 
> repo   level  .hg/store size  00manifest.d read       write
> ----------------------------------------------------------------
> mercurial  1      49,402,813     5,963,475   0.170159  53.250304
> mercurial  6      47,197,397     5,875,730   0.182820  56.264320
> mercurial  9      47,121,596     5,849,781   0.189219  56.293612
> 
> pypy       1     370,830,572    28,462,425   2.679217 460.721984
> pypy       6     340,112,317    27,648,747   2.768691 467.537158
> pypy       9     338,360,736    27,639,003   2.763495 476.589918
> 
> netbeans   1   1,281,847,810   165,495,457 122.477027 520.560316
> netbeans   6   1,205,284,353   159,161,207 139.876147 715.930400
> netbeans   9   1,197,135,671   155,034,586 141.620281 678.297064
> 
> mozilla    1   2,775,497,186   298,527,987 147.867662 751.263721
> mozilla    6   2,596,856,420   286,597,671 170.572118 987.056093
> mozilla    9   2,587,542,494   287,018,264 163.622338 739.803002
...
> diff --git a/mercurial/help/config.txt b/mercurial/help/config.txt
> --- a/mercurial/help/config.txt
> +++ b/mercurial/help/config.txt
> @@ -1881,6 +1881,11 @@ category impact performance and reposito
>      This option is enabled by default. When disabled, it also disables the
>      related ``storage.revlog.reuse-external-delta-parent`` option.
>  
> +``revlog.zlib.level``
> +    Zlib compression level used when storing data into the repository. Accepted
> +    Value range from 1 (lowest compression) to 9 (highest compression). Zlib
> +    default value is 6.

I know this is very unlikely to change, but does it make sense to say what
an external library's defaults are?


Thanks for doing this! :)

Jeff.
Pierre-Yves David - April 2, 2019, 1:50 p.m.
On 4/2/19 9:29 AM, Josef 'Jeff' Sipek wrote:
> On Sun, Mar 31, 2019 at 17:36:19 +0200, Pierre-Yves David wrote:
> ...
>> compression: introduce a `storage.revlog.zlib.level` configuration
>>
>> This option control the zlib compression level used when compression revlog
>> chunk.
>>
>> This is also a good excuse to pave the way for a similar configuration option
>> for the zstd compression engine. Having a dedicated option for each compression
>> algorithm is useful because they don't support the same range of values.
>>
>> Using a higher zlib compression impact CPU consumption at compression time, but
>> does not directly affected decompression time. However dealing with small
>> compressed chunk can directly help decompression and indirectly help other
>> revlog logic.
>>
>> I ran some basic test on repositories using different level. I am user the
> 
> s/user/using/ ?
> 
> ...
>> I also made some basic timing measurement. The "read" timing are gathered using
>> simple run of `hg perfrevlogrevisions`, the "write" measurement using `hg
>> perfrevlogwrite` (restricted to the last 5000 revisions for netbeans and
>> mozilla central). The timing are gathered on a generic machine, (not one  of
>> our performance locked machine), so small variation might not be meaningful.
> 
> You did more than one measurement, so measurement -> measurements, and
> timing -> timings?  Alternatively, keep the singular but then make the verbs
> match: are -> is.
> 
> Sorry to nit-pick, but since this text will end up in the commit messages...
> :)
> 
>> However large trend remains relevant.
>>
>> Keep in mind that these number are not pure compression/decompression time.
> 
> s/number/numbers/
> 
>> They also involve the full revlog logic. In particular the difference in chunk
>> size has an impact on the delta chain structure, affecting performance when
>> writing or reading them.
>>
>> On read/write performance, the compression level has a bigger impact.
>> Counter-intuitively, higher compression level raise better "write" performance
> 
> s/raise better/increase/ ?
> 
> This actually confuses me a bit.  Based on the table below, it looks like
> higher compression level has non-linear effect on read/write performance.
> Maybe I'm not understanding what you meant by 'raise "better"'.
> 
> While I expect to see a "hump" in *write* performance (because high zlib
> compression levels are such cpu hogs), I didn't expect to see one for *read*
> perfomance.  I suppose the read hump could be explained by the shape of the
> DAG, as you point out.

Yes, we are not doing pure compression tests here. This deserves an
independent full array of deeper testing.


>> +``revlog.zlib.level``
>> +    Zlib compression level used when storing data into the repository. Accepted
>> +    Value range from 1 (lowest compression) to 9 (highest compression). Zlib
>> +    default value is 6.
> 
> I know this is very unlikely to change, but does it make sense to say what
> an external libarary's defaults are?

I do not understand your question.
Gregory Szorc - April 2, 2019, 6:16 p.m.
On Tue, Apr 2, 2019 at 12:29 AM Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
wrote:

> On Sun, Mar 31, 2019 at 17:36:19 +0200, Pierre-Yves David wrote:
> ...
> > compression: introduce a `storage.revlog.zlib.level` configuration
> >
> > This option control the zlib compression level used when compression
> revlog
> > chunk.
> >
> > This is also a good excuse to pave the way for a similar configuration
> option
> > for the zstd compression engine. Having a dedicated option for each
> compression
> > algorithm is useful because they don't support the same range of values.
> >
> > Using a higher zlib compression impact CPU consumption at compression
> time, but
> > does not directly affected decompression time. However dealing with small
> > compressed chunk can directly help decompression and indirectly help
> other
> > revlog logic.
> >
> > I ran some basic test on repositories using different level. I am user
> the
>
> s/user/using/ ?
>
> ...
> > I also made some basic timing measurement. The "read" timing are
> gathered using
> > simple run of `hg perfrevlogrevisions`, the "write" measurement using `hg
> > perfrevlogwrite` (restricted to the last 5000 revisions for netbeans and
> > mozilla central). The timing are gathered on a generic machine, (not
> one  of
> > our performance locked machine), so small variation might not be
> meaningful.
>
> You did more than one measurement, so measurement -> measurements, and
> timing -> timings?  Alternatively, keep the singular but then make the
> verbs
> match: are -> is.
>
> Sorry to nit-pick, but since this text will end up in the commit
> messages...
> :)
>
> > However large trend remains relevant.
> >
> > Keep in mind that these number are not pure compression/decompression
> time.
>
> s/number/numbers/
>
> > They also involve the full revlog logic. In particular the difference in
> chunk
> > size has an impact on the delta chain structure, affecting performance
> when
> > writing or reading them.
> >
> > On read/write performance, the compression level has a bigger impact.
> > Counter-intuitively, higher compression level raise better "write"
> performance
>
> s/raise better/increase/ ?
>
> This actually confuses me a bit.  Based on the table below, it looks like
> higher compression level has non-linear effect on read/write performance.
> Maybe I'm not understanding what you meant by 'raise "better"'.
>
> While I expect to see a "hump" in *write* performance (because high zlib
> compression levels are such cpu hogs), I didn't expect to see one for
> *read*
> perfomance.  I suppose the read hump could be explained by the shape of the
> DAG, as you point out.
>

https://gregoryszorc.com/blog/2017/03/07/better-compression-with-zstandard/
has some nice charts graphing (de)compression speed versus compression
levels for various compression formats. zlib and zstd decompression speed
should be relatively linear across compression levels. The exception is
negative compression levels with zstd, which will be faster than positive
levels.

When Python comes into play, the read speed can be influenced by a number
of factors, including the overhead of allocating the output buffer.
Measuring performance of thousands of small operations quickly turns into
measuring the efficiency/overhead of the Python bindings to the compression
library and Python itself. One of the things we do with zstd revlogs is
reuse the decompression context (self._dctx), whereas zlib revlogs call
zlib.decompress() on every item. There is undoubtedly some overhead to
constructing those objects, and constructing decompressors to support large
window sizes (read: higher compression levels) may introduce additional
overhead. I dunno.
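
To make the two access patterns concrete, here is a minimal sketch written
against the standalone python-zstandard package (imported as `zstandard`)
rather than Mercurial's vendored bindings, so names differ from the revlog
code:

    import zlib
    import zstandard

    # zlib-style revlog reads: a fresh decompress call for every chunk
    def read_chunks_zlib(chunks):
        return [zlib.decompress(c) for c in chunks]

    # zstd-style revlog reads: one decompression context reused across
    # all chunks, amortizing whatever setup cost the decompressor has
    def read_chunks_zstd(chunks):
        dctx = zstandard.ZstdDecompressor()
        # assumes each frame embeds its decompressed size; otherwise pass
        # max_output_size= to decompress()
        return [dctx.decompress(c) for c in chunks]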

FWIW the `hg perfrevlogchunks` command isolates the various "stages" of
reading from revlogs and can be very useful for isolating the compression
performance. It even has separate benchmarks for measuring zlib versus zstd
compression!
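
For example, something along these lines (exact flags may vary; see
`hg help perfrevlogchunks` with the extension enabled):

  $ hg --config extensions.perf=~/src/mercurial/contrib/perf.py perfrevlogchunks -m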


>
> > for the large repositories in our tested setting. Maybe because the last
> 5000
> > delta chain end up having a very different shape in this specific spot?
> Or maybe
> > because of a more general trend of better delta chains thanks to the
> smaller
> > chunk and snapshot.
> >
> > This series does not intend to change the default compression level.
> However,
> > these result call for a deeper analysis of this performance difference
> in the
> > future.
> >
> > Full data
> > =========
> >
> > repo   level  .hg/store size  00manifest.d read       write
> > ----------------------------------------------------------------
> > mercurial  1      49,402,813     5,963,475   0.170159  53.250304
> > mercurial  6      47,197,397     5,875,730   0.182820  56.264320
> > mercurial  9      47,121,596     5,849,781   0.189219  56.293612
> >
> > pypy       1     370,830,572    28,462,425   2.679217 460.721984
> > pypy       6     340,112,317    27,648,747   2.768691 467.537158
> > pypy       9     338,360,736    27,639,003   2.763495 476.589918
> >
> > netbeans   1   1,281,847,810   165,495,457 122.477027 520.560316
> > netbeans   6   1,205,284,353   159,161,207 139.876147 715.930400
> > netbeans   9   1,197,135,671   155,034,586 141.620281 678.297064
> >
> > mozilla    1   2,775,497,186   298,527,987 147.867662 751.263721
> > mozilla    6   2,596,856,420   286,597,671 170.572118 987.056093
> > mozilla    9   2,587,542,494   287,018,264 163.622338 739.803002
> ...
> > diff --git a/mercurial/help/config.txt b/mercurial/help/config.txt
> > --- a/mercurial/help/config.txt
> > +++ b/mercurial/help/config.txt
> > @@ -1881,6 +1881,11 @@ category impact performance and reposito
> >      This option is enabled by default. When disabled, it also disables
> the
> >      related ``storage.revlog.reuse-external-delta-parent`` option.
> >
> > +``revlog.zlib.level``
> > +    Zlib compression level used when storing data into the repository.
> Accepted
> > +    Value range from 1 (lowest compression) to 9 (highest compression).
> Zlib
> > +    default value is 6.
>
> I know this is very unlikely to change, but does it make sense to say what
> an external libarary's defaults are?
>
>
> Thanks for doing this! :)
>
> Jeff.
>
> --
> Reality is merely an illusion, albeit a very persistent one.
>                 - Albert Einstein
>

Patch

=========

repo   level  .hg/store size  00manifest.d read       write
----------------------------------------------------------------
mercurial  1      49,402,813     5,963,475   0.170159  53.250304
mercurial  6      47,197,397     5,875,730   0.182820  56.264320
mercurial  9      47,121,596     5,849,781   0.189219  56.293612

pypy       1     370,830,572    28,462,425   2.679217 460.721984
pypy       6     340,112,317    27,648,747   2.768691 467.537158
pypy       9     338,360,736    27,639,003   2.763495 476.589918

netbeans   1   1,281,847,810   165,495,457 122.477027 520.560316
netbeans   6   1,205,284,353   159,161,207 139.876147 715.930400
netbeans   9   1,197,135,671   155,034,586 141.620281 678.297064

mozilla    1   2,775,497,186   298,527,987 147.867662 751.263721
mozilla    6   2,596,856,420   286,597,671 170.572118 987.056093
mozilla    9   2,587,542,494   287,018,264 163.622338 739.803002

diff --git a/mercurial/configitems.py b/mercurial/configitems.py
--- a/mercurial/configitems.py
+++ b/mercurial/configitems.py
@@ -992,6 +992,9 @@  coreconfigitem('storage', 'revlog.reuse-
 coreconfigitem('storage', 'revlog.reuse-external-delta-parent',
     default=None,
 )
+coreconfigitem('storage', 'revlog.zlib.level',
+    default=None,
+)
 coreconfigitem('server', 'bookmarks-pushkey-compat',
     default=True,
 )
diff --git a/mercurial/help/config.txt b/mercurial/help/config.txt
--- a/mercurial/help/config.txt
+++ b/mercurial/help/config.txt
@@ -1881,6 +1881,11 @@  category impact performance and reposito
     This option is enabled by default. When disabled, it also disables the
     related ``storage.revlog.reuse-external-delta-parent`` option.
 
+``revlog.zlib.level``
+    Zlib compression level used when storing data into the repository. Accepted
+    Value range from 1 (lowest compression) to 9 (highest compression). Zlib
+    default value is 6.
+
 ``server``
 ----------
 
diff --git a/mercurial/localrepo.py b/mercurial/localrepo.py
--- a/mercurial/localrepo.py
+++ b/mercurial/localrepo.py
@@ -797,6 +797,12 @@  def resolverevlogstorevfsoptions(ui, req
         if r.startswith(b'exp-compression-'):
             options[b'compengine'] = r[len(b'exp-compression-'):]
 
+    options[b'zlib.level'] = ui.configint(b'storage', b'revlog.zlib.level')
+    if options[b'zlib.level'] is not None:
+        if not (0 <= options[b'zlib.level'] <= 9):
+            msg = _('invalid value for `storage.revlog.zlib.level` config: %d')
+            raise error.Abort(msg % options[b'zlib.level'])
+
     if repository.NARROW_REQUIREMENT in requirements:
         options[b'enableellipsis'] = True
 
diff --git a/mercurial/revlog.py b/mercurial/revlog.py
--- a/mercurial/revlog.py
+++ b/mercurial/revlog.py
@@ -371,6 +371,7 @@  class revlog(object):
         self._nodecache = {nullid: nullrev}
         self._nodepos = None
         self._compengine = 'zlib'
+        self._compengineopts = {}
         self._maxdeltachainspan = -1
         self._withsparseread = False
         self._sparserevlog = False
@@ -416,6 +417,8 @@  class revlog(object):
             self._lazydeltabase = bool(opts.get('lazydeltabase', False))
         if 'compengine' in opts:
             self._compengine = opts['compengine']
+        if 'zlib.level' in opts:
+            self._compengineopts['zlib.level'] = opts['zlib.level']
         if 'maxdeltachainspan' in opts:
             self._maxdeltachainspan = opts['maxdeltachainspan']
         if self._mmaplargeindex and 'mmapindexthreshold' in opts:
@@ -526,7 +529,8 @@  class revlog(object):
 
     @util.propertycache
     def _compressor(self):
-        return util.compengines[self._compengine].revlogcompressor()
+        engine = util.compengines[self._compengine]
+        return engine.revlogcompressor(self._compengineopts)
 
     def _indexfp(self, mode='r'):
         """file object for the revlog's index file"""
@@ -1981,7 +1985,7 @@  class revlog(object):
         except KeyError:
             try:
                 engine = util.compengines.forrevlogheader(t)
-                compressor = engine.revlogcompressor()
+                compressor = engine.revlogcompressor(self._compengineopts)
                 self._decompressors[t] = compressor
             except KeyError:
                 raise error.RevlogError(_('unknown compression type %r') % t)
diff --git a/mercurial/utils/compression.py b/mercurial/utils/compression.py
--- a/mercurial/utils/compression.py
+++ b/mercurial/utils/compression.py
@@ -505,7 +505,10 @@  class _zlibengine(compressionengine):
                                          stringutil.forcebytestr(e))
 
     def revlogcompressor(self, opts=None):
-        return self.zlibrevlogcompressor()
+        level = None
+        if opts is not None:
+            level = opts.get('zlib.level')
+        return self.zlibrevlogcompressor(level)
 
 compengines.register(_zlibengine())
 
diff --git a/tests/test-repo-compengines.t b/tests/test-repo-compengines.t
--- a/tests/test-repo-compengines.t
+++ b/tests/test-repo-compengines.t
@@ -82,3 +82,59 @@  with that engine or a requirement
       0x78 (x)  : 199 (100.00%)
 
 #endif
+
+checking zlib options
+=====================
+
+  $ hg init zlib-level-default
+  $ hg init zlib-level-1
+  $ cat << EOF >> zlib-level-1/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=1
+  > EOF
+  $ hg init zlib-level-9
+  $ cat << EOF >> zlib-level-9/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=9
+  > EOF
+
+
+  $ commitone() {
+  >    repo=$1
+  >    cp $RUNTESTDIR/bundles/issue4438-r1.hg $repo/a
+  >    hg -R $repo add $repo/a
+  >    hg -R $repo commit -m some-commit
+  > }
+
+  $ for repo in zlib-level-default zlib-level-1 zlib-level-9; do
+  >     commitone $repo
+  > done
+
+  $ $RUNTESTDIR/f -s */.hg/store/data/*
+  zlib-level-1/.hg/store/data/a.i: size=4146
+  zlib-level-9/.hg/store/data/a.i: size=4138
+  zlib-level-default/.hg/store/data/a.i: size=4138
+
+Test error cases
+
+  $ hg init zlib-level-invalid
+  $ cat << EOF >> zlib-level-invalid/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=foobar
+  > EOF
+  $ commitone zlib-level-invalid
+  abort: storage.revlog.zlib.level is not a valid integer ('foobar')
+  abort: storage.revlog.zlib.level is not a valid integer ('foobar')
+  [255]
+
+  $ hg init zlib-level-out-of-range
+  $ cat << EOF >> zlib-level-out-of-range/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=42
+  > EOF
+
+  $ commitone zlib-level-out-of-range
+  abort: invalid value for `storage.revlog.zlib.level` config: 42
+  abort: invalid value for `storage.revlog.zlib.level` config: 42
+  [255]
+