Patchwork D3700: run-tests: add support for external test result

Submitter phabricator
Date June 7, 2018, 7:19 p.m.
Message ID <differential-rev-PHID-DREV-zwftwzuu2s76a25b74wk-req@phab.mercurial-scm.org>
Permalink /patch/32021/
State New
Headers show

Comments

phabricator - June 7, 2018, 7:19 p.m.
lothiraldan created this revision.
Herald added a subscriber: mercurial-devel.
Herald added a reviewer: hg-reviewers.

REVISION SUMMARY
  The goal is to begin experimenting with custom test results. I'm not sure we
  should offer any backward-compatibility guarantee on that plugin API, as it
  doesn't change often and shouldn't have too many clients.
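The patch below selects the result class from the CUSTOM_TEST_RESULT environment variable, so an external tool supplies a module exposing a TestResult class whose add* hooks receive test events. A minimal sketch of such a plugin module, following the patch's interface (the module name my_ci_result and the events list are illustrative, not part of the patch):

```python
# my_ci_result.py -- hypothetical plugin module; run-tests.py would select it
# via CUSTOM_TEST_RESULT=my_ci_result. Only the TestResult name and the
# (options, stream, descriptions, verbosity) constructor shape come from the
# patch; everything else here is an illustration.
from __future__ import print_function

import unittest

class TestResult(unittest.TestResult):

    def __init__(self, options, *args, **kwargs):
        super(TestResult, self).__init__(*args, **kwargs)
        self._options = options
        # Collect events so an external tool could forward them somewhere.
        self.events = []

    def addSuccess(self, test):
        self.events.append(("pass", str(test)))

    def addFailure(self, test, reason):
        # Note: run-tests passes a reason string, not the usual exc_info tuple.
        self.events.append(("fail", str(test), reason))
```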

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D3700

AFFECTED FILES
  tests/basic_test_result.py
  tests/run-tests.py
  tests/test-run-tests.t

CHANGE DETAILS




To: lothiraldan, #hg-reviewers
Cc: mercurial-devel
phabricator - June 12, 2018, 4:58 p.m.
durin42 added a comment.


  I see some what, but not any why. Why is this useful?

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D3700

To: lothiraldan, #hg-reviewers
Cc: durin42, mercurial-devel
phabricator - June 12, 2018, 9:11 p.m.
lothiraldan added a comment.


  In https://phab.mercurial-scm.org/D3700#58376, @durin42 wrote:
  
  > I see some what, but not any why. Why is this useful?
  
  
  I need this changeset to integrate the mercurial test runner with some external tools.

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D3700

To: lothiraldan, #hg-reviewers
Cc: durin42, mercurial-devel
phabricator - June 12, 2018, 9:12 p.m.
durin42 added a comment.


  In https://phab.mercurial-scm.org/D3700#58403, @lothiraldan wrote:
  
  > In https://phab.mercurial-scm.org/D3700#58376, @durin42 wrote:
  >
  > > I see some what, but not any why. Why is this useful?
  >
  >
  > I need this changeset to integrate the mercurial test runner with some external tools.
  
  
  I'd still like more information. Why is the json report inadequate? What's your goal?
  
  (Remember my perspective: every feature here is a liability, so anything we're not using for development on Mercurial is something I'm hesitant to take on in run-tests)

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D3700

To: lothiraldan, #hg-reviewers
Cc: durin42, mercurial-devel

Patch

diff --git a/tests/test-run-tests.t b/tests/test-run-tests.t
--- a/tests/test-run-tests.t
+++ b/tests/test-run-tests.t
@@ -1202,6 +1202,15 @@ 
   $ echo dead:beef::1
   $LOCALIP (glob)
 
+Add support for external test formatter
+=======================================
+
+  $ CUSTOM_TEST_RESULT=basic_test_result $PYTHON $TESTDIR/run-tests.py --with-hg=`which hg` "$@" test-success.t test-failure.t
+  
+  # Ran 2 tests, 0 skipped, 0 failed.
+  FAILURE! test-failure.t output changed
+  SUCCESS! test-success.t
+
 Test reusability for third party tools
 ======================================
 
diff --git a/tests/run-tests.py b/tests/run-tests.py
--- a/tests/run-tests.py
+++ b/tests/run-tests.py
@@ -1852,6 +1852,16 @@ 
                 self.stream.writeln('INTERRUPTED: %s (after %d seconds)' % (
                     test.name, self.times[-1][3]))
 
+def getTestResult():
+    """
+    Returns the relevant test result
+    """
+    if "CUSTOM_TEST_RESULT" in os.environ:
+        testresultmodule = __import__(os.environ["CUSTOM_TEST_RESULT"])
+        return testresultmodule.TestResult
+    else:
+        return TestResult
+
 class TestSuite(unittest.TestSuite):
     """Custom unittest TestSuite that knows how to execute Mercurial tests."""
 
@@ -2091,8 +2101,8 @@ 
         self._runner = runner
 
     def listtests(self, test):
-        result = TestResult(self._runner.options, self.stream,
-                            self.descriptions, 0)
+        result = getTestResult()(self._runner.options, self.stream,
+                                 self.descriptions, 0)
         test = sorted(test, key=lambda t: t.name)
         for t in test:
             print(t.name)
@@ -2110,9 +2120,8 @@ 
         return result
 
     def run(self, test):
-        result = TestResult(self._runner.options, self.stream,
-                            self.descriptions, self.verbosity)
-
+        result = getTestResult()(self._runner.options, self.stream,
+                                 self.descriptions, self.verbosity)
         test(result)
 
         failed = len(result.failures)
diff --git a/tests/basic_test_result.py b/tests/basic_test_result.py
new file mode 100644
--- /dev/null
+++ b/tests/basic_test_result.py
@@ -0,0 +1,46 @@ 
+from __future__ import print_function
+
+import unittest
+
+class TestResult(unittest._TextTestResult):
+
+    def __init__(self, options, *args, **kwargs):
+        super(TestResult, self).__init__(*args, **kwargs)
+        self._options = options
+
+        # unittest.TestResult didn't have skipped until 2.7. We need to
+        # polyfill it.
+        self.skipped = []
+
+        # We have a custom "ignored" result that isn't present in any Python
+        # unittest implementation. It is very similar to skipped. It may make
+        # sense to map it into skip some day.
+        self.ignored = []
+
+        self.times = []
+        self._firststarttime = None
+        # Data stored for the benefit of generating xunit reports.
+        self.successes = []
+        self.faildata = {}
+
+    def addFailure(self, test, reason):
+        print("FAILURE!", test, reason)
+
+    def addSuccess(self, test):
+        print("SUCCESS!", test)
+
+    def addError(self, test, err):
+        print("ERR!", test, err)
+
+    # Polyfill.
+    def addSkip(self, test, reason):
+        print("SKIP!", test, reason)
+
+    def addIgnore(self, test, reason):
+        print("IGNORE!", test, reason)
+
+    def addOutputMismatch(self, test, ret, got, expected):
+        return False
+
+    def stopTest(self, test, interrupted=False):
+        super(TestResult, self).stopTest(test)