Patchwork D3267: repository: define new interface for running commands

Submitter phabricator
Date April 11, 2018, 11:43 p.m.
Message ID <differential-rev-PHID-DREV-eflwwgultxmz6dqkwpou-req@phab.mercurial-scm.org>
Permalink /patch/30760/
State Superseded

Comments

phabricator - April 11, 2018, 11:43 p.m.
indygreg created this revision.
Herald added a subscriber: mercurial-devel.
Herald added a reviewer: hg-reviewers.

REVISION SUMMARY
  Today, the peer interface exposes methods for each command that can
  be executed. In addition, there is an iterbatch() API that allows
  commands to be issued in batches and provides an iterator over the
  results. This is a glorified wrapper around the "batch" wire command.
  
  Wire protocol version 2 supports nicer things (such as batching
  any command and out-of-order replies). It will require a more
  flexible API for executing commands.
  
  This commit introduces a new peer interface for making command
  requests. In the new world, you can't simply call a method on the
  peer to execute a command: you need to obtain an object to be used
  for executing commands. That object can be used to issue a single
  command or it can batch multiple requests. In the case of full duplex
  peers, the command may even be sent out over the wire immediately.
  
  There are no per-command methods. Instead, there is a generic
  method to call a command. The implementation can then perform domain
  specific processing for specific commands. This includes passing
  data via a specially named argument.
  
  Arguments are also passed as a dictionary instead of using **kwargs.
  While **kwargs is nicer to use, we've historically gotten into
  trouble using it because there will inevitably be a conflict between
  the name of an argument to a wire protocol command and an argument
  we want to pass into a function.
  
  Instead of a command returning a value, it returns a future which
  will resolve to a value. This opens the door for out-of-order
  response handling and concurrent response handling in the version
  2 protocol.
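  As an illustrative sketch of the calling convention described above (``dummyexecutor`` and the ``heads`` command result are hypothetical stand-ins, not the actual Mercurial implementation):

```python
# Hypothetical sketch of the proposed executor API; "dummyexecutor"
# and the fake command result are stand-ins, not Mercurial code.
import concurrent.futures

class dummyexecutor(object):
    """Buffers command requests and resolves them when sent."""

    def __init__(self):
        self._calls = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def callcommand(self, name, args):
        # Arguments travel as a dict, not **kwargs, so wire protocol
        # argument names can never collide with Python parameter names.
        f = concurrent.futures.Future()
        self._calls.append((name, args, f))
        return f

    def sendcommands(self):
        calls, self._calls = self._calls, []
        for name, args, f in calls:
            # Stand-in for an actual wire protocol response.
            f.set_result({'command': name, 'args': args})

    def close(self):
        self.sendcommands()

with dummyexecutor() as e:
    fut = e.callcommand('heads', {})

# Exiting the context manager waits on outstanding requests, so the
# future is resolved by this point.
result = fut.result()
```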

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D3267

AFFECTED FILES
  mercurial/repository.py

To: indygreg, #hg-reviewers
Cc: mercurial-devel
phabricator - April 12, 2018, 1 a.m.
indygreg added a comment.


  After coding up a handful of patches to switch to the new interface, I concede that I like the use of methods on the existing API. However, it is important for protocol version 2 that the peer interface switch to futures. I'm very reluctant to change the return value of peer APIs because that would likely be a difficult API break. I would be OK with exposing attributes on the command executor instance that magically route to commands. That's how the existing `iterbatch()` interface works. While the code is a bit wonky, it would work.
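  A minimal sketch of what such attribute routing could look like (``attrexecutor`` is hypothetical; this is not the final Mercurial code):

```python
# Hypothetical sketch of attribute routing on a command executor;
# not the actual Mercurial implementation.
import functools

class attrexecutor(object):
    def callcommand(self, name, args):
        # Stand-in: a real executor would return a future here.
        return (name, args)

    def __getattr__(self, name):
        # Route e.heads(args) to e.callcommand('heads', args).
        if name.startswith('_'):
            raise AttributeError(name)
        return functools.partial(self.callcommand, name)

e = attrexecutor()
routed = e.heads({})
```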

phabricator - April 13, 2018, 5:23 p.m.
indygreg planned changes to this revision.
indygreg added a comment.


  I'll be making small revisions to the interface docs and implementation.

phabricator - April 13, 2018, 7:20 p.m.
indygreg added a comment.


  We now send non-batchable commands immediately and wrap queued futures so `result()` will trigger submission. This improves the end-user experience from:
  
    with peer.commandexecutor() as e:
        f = e.callcommand(...)
    
    res = f.result()
  
  to
  
    with peer.commandexecutor() as e:
        res = e.callcommand(...).result()
  
  And in order to preserve streaming on batch responses:
  
    with peer.commandexecutor() as e:
        fs = []
        for x in ...:
            fs.append(...)
    
        e.sendcommands()
        for f in fs:
            result = f.result()
  
  to
  
    with peer.commandexecutor() as e:
        fs = []
        for x in ...:
            fs.append(e.callcommand(...))
    
        for f in fs:
            result = f.result()
  
  This latter improvement works because the first `result()` call on any returned future triggers submission of all queued commands. It also means the futures can be iterated or accessed in any order. Of course, wire protocol version 1 will resolve them in the order they were issued. But this isn't necessarily true of wire protocol version 2 :)
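  The submit-on-first-`result()` behavior can be illustrated with a toy future wrapper (``queuedfuture`` and ``batchexecutor`` are hypothetical names, not the real implementation):

```python
# Toy illustration of futures that flush the executor's queue on the
# first result() call; names are hypothetical, not Mercurial's.
import concurrent.futures

class queuedfuture(concurrent.futures.Future):
    def __init__(self, executor):
        super(queuedfuture, self).__init__()
        self._executor = executor

    def result(self, timeout=None):
        # The first result() call anywhere triggers submission of
        # every queued command, preserving batch semantics.
        self._executor.sendcommands()
        return super(queuedfuture, self).result(timeout)

class batchexecutor(object):
    def __init__(self):
        self._queued = []

    def callcommand(self, name, args):
        f = queuedfuture(self)
        self._queued.append((name, args, f))
        return f

    def sendcommands(self):
        queued, self._queued = self._queued, []
        for name, args, f in queued:
            # Stand-in for a wire protocol response.
            f.set_result(name)

e = batchexecutor()
fs = [e.callcommand(n, {}) for n in ('heads', 'known')]
# Futures may be consumed in any order; the first access submits all.
second = fs[1].result()
first = fs[0].result()
```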


Patch

diff --git a/mercurial/repository.py b/mercurial/repository.py
--- a/mercurial/repository.py
+++ b/mercurial/repository.py
@@ -196,6 +196,78 @@ 
     def changegroupsubset(bases, heads, kind):
         pass
 
+class ipeercommandexecutor(zi.Interface):
+    """Represents a mechanism to execute remote commands.
+
+    This is the primary interface for requesting that wire protocol commands
+    be executed. Instances of this interface are active in a context manager
+    and have a well-defined lifetime. When the context manager exits, all
+    outstanding requests are waited on.
+    """
+
+    def callcommand(name, args):
+        """Request that a named command be executed.
+
+        Receives the command name and a dictionary of command arguments.
+
+        Returns a ``concurrent.futures.Future`` that will resolve to the
+        result of that command request. That exact value is left up to
+        the implementation and possibly varies by command.
+
+        The command may or may not be sent over the wire immediately.
+        """
+
+    def sendcommands():
+        """Trigger submission of queued command requests.
+
+        Not all transports submit commands as soon as they are requested to
+        run. When called, this method forces queued command requests to be
+        issued. It will no-op if all commands have already been sent.
+
+        When called, no more new commands may be issued with this executor.
+        """
+
+    def close():
+        """Signal that this command request is finished.
+
+        When called, no more new commands may be issued. All outstanding
+        commands that have previously been issued are waited on before
+        returning. This not only includes waiting for the futures to resolve,
+        but also waiting for all response data to arrive. In other words,
+        calling this waits for all on-wire state for issued command requests
+        to finish.
+
+        When used as a context manager, this method is called when exiting the
+        context manager.
+
+        This method may call ``sendcommands()`` if there are buffered commands.
+        """
+
+class ipeerrequests(zi.Interface):
+    """Interface for executing commands on a peer."""
+
+    def commandexecutor():
+        """A context manager that resolves to an ipeercommandexecutor.
+
+        The object this resolves to can be used to issue command requests
+        to the peer.
+
+        Callers should call its ``callcommand`` method to issue command
+        requests.
+
+        A new executor should be obtained for each distinct set of commands
+        (possibly just a single command) that the consumer wants to execute
+        as part of a single operation or round trip. This is because some
+        peers are half-duplex and/or don't support persistent connections.
+        e.g. in the case of HTTP peers, commands sent to an executor represent
+        a single HTTP request. While some peers may support multiple command
+        sends over the wire per executor, consumers need to code to the least
+        capable peer. So it should be assumed that command executors buffer
+        called commands until they are told to send them and that each
+        command executor could result in a new connection or wire-level request
+        being issued.
+        """
+
 class ipeerbase(ipeerconnection, ipeercapabilities, ipeercommands):
     """Unified interface for peer repositories.