feat: add ability to run build process to remote instances (#118)

commit 5ddc08fce7 (parent f2f6f6df70), committed by GitHub, 2023-12-13 15:38:51 +02:00
39 changed files with 1062 additions and 73 deletions


@@ -1,6 +1,14 @@
ahriman.application.application package
=======================================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   ahriman.application.application.workers

Submodules
----------


@@ -0,0 +1,37 @@
ahriman.application.application.workers package
===============================================

Submodules
----------

ahriman.application.application.workers.local\_updater module
-------------------------------------------------------------

.. automodule:: ahriman.application.application.workers.local_updater
   :members:
   :no-undoc-members:
   :show-inheritance:

ahriman.application.application.workers.remote\_updater module
--------------------------------------------------------------

.. automodule:: ahriman.application.application.workers.remote_updater
   :members:
   :no-undoc-members:
   :show-inheritance:

ahriman.application.application.workers.updater module
------------------------------------------------------

.. automodule:: ahriman.application.application.workers.updater
   :members:
   :no-undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: ahriman.application.application.workers
   :members:
   :no-undoc-members:
   :show-inheritance:


@@ -252,6 +252,14 @@ ahriman.models.waiter module
   :no-undoc-members:
   :show-inheritance:

ahriman.models.worker module
----------------------------

.. automodule:: ahriman.models.worker
   :members:
   :no-undoc-members:
   :show-inheritance:

Module contents
---------------


@@ -20,6 +20,14 @@ ahriman.web.schemas.auth\_schema module
   :no-undoc-members:
   :show-inheritance:

ahriman.web.schemas.build\_options\_schema module
-------------------------------------------------

.. automodule:: ahriman.web.schemas.build_options_schema
   :members:
   :no-undoc-members:
   :show-inheritance:

ahriman.web.schemas.changes\_schema module
------------------------------------------


@@ -86,6 +86,7 @@ Build related configuration. Group name can refer to architecture, e.g. ``build:
* ``triggers`` - list of ``ahriman.core.triggers.Trigger`` class implementations (e.g. ``ahriman.core.report.ReportTrigger ahriman.core.upload.UploadTrigger``) which will be loaded and run at the end of processing, space separated list of strings, optional. You can also specify triggers by their paths, e.g. ``/usr/lib/python3.10/site-packages/ahriman/core/report/report.py.ReportTrigger``. Triggers are run in the order in which they are listed.
* ``triggers_known`` - optional list of ``ahriman.core.triggers.Trigger`` class implementations which are not run automatically and are used only for trigger discovery and configuration validation.
* ``vcs_allowed_age`` - maximum age in seconds of VCS packages before their version is updated from the remote source, integer, optional, default ``604800``.
* ``workers`` - list of worker node addresses used for the build process, space separated list of strings, optional. Each worker address must be a valid and reachable URL, e.g. ``https://10.0.0.1:8080``. If none are set, the build process will be run on the current node. See the example below.
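For instance, a minimal sketch of this option (the worker addresses here are placeholders):

.. code-block:: ini

   [build]
   workers = https://10.0.0.1:8080 https://10.0.0.2:8080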
``repository`` group
--------------------


@@ -1022,6 +1022,116 @@ This action must be done in two steps:
#. Remove package on worker.
#. Remove package on master node.
Delegate builds to remote workers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This setup heavily uses the upload feature described above and, in addition, automatically delegates the build process to the build machines. Same as above, there must be at least two instances available (``master`` and ``worker``); however, all ``worker`` nodes must be run in the web service mode.
Master node configuration
"""""""""""""""""""""""""
In addition to the configuration above, the worker list must be defined in the configuration file (the ``build.workers`` option), e.g.:
.. code-block:: ini

   [build]
   workers = https://worker1.example.com https://worker2.example.com

   [web]
   enable_archive_upload = yes
   wait_timeout = 0
In the example above, ``https://worker1.example.com`` and ``https://worker2.example.com`` are the addresses of the remote ``worker`` nodes available to the ``master`` node.
If authentication is required (which is the recommended way to set it up), it can be configured by using the ``status`` section as usual.
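For example, a minimal sketch (the credentials are placeholders; the same values are reused in the full example below):

.. code-block:: ini

   [status]
   username = builder-user
   password = very-secure-password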
Worker nodes configuration
""""""""""""""""""""""""""
The worker nodes must point to the master node repository, otherwise internal dependencies will not be handled correctly. In order to do so, the ``--server`` argument (or the ``AHRIMAN_REPOSITORY_SERVER`` environment variable for docker images) can be used.
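For docker-based workers, a sketch of passing this setting to the worker run command used in the docker example below (the repository URL is a placeholder; the exact value depends on how the ``master`` node serves its repository):

.. code-block:: shell

   # illustrative only: point the worker at the master node repository;
   # replace the URL with the address of your master node repository
   docker run --privileged -p 8081:8081 -e AHRIMAN_PORT=8081 \
       -e AHRIMAN_REPOSITORY_SERVER=http://master.example.com/repo \
       -v worker.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest web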
Also, if authentication is enabled, the same user with the same password must be created for all workers.
It is also recommended to set ``web.wait_timeout`` to infinite in case of multiple conflicting runs.
Other settings are the same as mentioned above.
Triple node minimal docker example
""""""""""""""""""""""""""""""""""
In this example, all instances run on the same machine with address ``172.17.0.1``, and their ports are available outside of the containers. The master node configuration (``master.ini``) is as follows:
.. code-block:: ini

   [auth]
   target = mapping

   [status]
   username = builder-user
   password = very-secure-password

   [build]
   workers = http://172.17.0.1:8081 http://172.17.0.1:8082

   [web]
   enable_archive_upload = yes
   wait_timeout = 0
Command to run the master node:
.. code-block:: shell

   docker run --privileged -p 8080:8080 -e AHRIMAN_PORT=8080 -v master.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest web
The worker node configuration (``worker.ini``, applicable to all workers) is as follows:
.. code-block:: ini

   [auth]
   target = mapping

   [status]
   address = http://172.17.0.1:8080
   username = builder-user
   password = very-secure-password

   [upload]
   target = remote-service

   [remote-service]

   [report]
   target = remote-call

   [remote-call]
   manual = yes
   wait_timeout = 0

   [build]
   triggers = ahriman.core.upload.UploadTrigger ahriman.core.report.ReportTrigger
Command to run the worker nodes (assuming there will be two workers, one on port ``8081`` and the other on port ``8082``):

.. code-block:: shell

   docker run --privileged -p 8081:8081 -e AHRIMAN_PORT=8081 -v worker.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest web
   docker run --privileged -p 8082:8082 -e AHRIMAN_PORT=8082 -v worker.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest web
Unlike the previous setup, it is not required to mount the repository root for the ``worker`` nodes, because the ``worker`` nodes don't use it anyway.
Addition of new package, package removal, repository update
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In all scenarios, the update process must be run only on the ``master`` node. Unlike the setup described above, automatic updates must also be enabled only for the ``master`` node.
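For instance, a sketch of running these actions against the ``master`` node container using the usual ``ahriman`` subcommands (the package name and the mounted volumes here are illustrative only):

.. code-block:: shell

   # add a new package and run the repository update on the master node only;
   # the builds themselves are delegated to the configured workers
   docker run --privileged -v master.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest package-add pacman
   docker run --privileged -v master.ini:/etc/ahriman.ini.d/overrides.ini arcan1s/ahriman:latest repo-update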
Known limitations
"""""""""""""""""
* Workers don't support local packages. However, it is possible to build custom packages by providing their sources via the ``ahriman.core.gitremote.RemotePullTrigger`` trigger (see the sketch after this list).
* No dynamic node discovery. If one of the worker nodes is unavailable, the build process will fail.
* No pkgrel bump on conflicts. It usually works, but it isn't guaranteed.
* The identical user must be created for all workers. The ``master`` node user, however, can be different from this one.
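A minimal sketch of enabling that trigger by adding it to the ``build.triggers`` list (the trigger's own configuration, e.g. the remote source to pull from, is not shown here and must be set up according to its documentation):

.. code-block:: ini

   [build]
   triggers = ahriman.core.gitremote.RemotePullTrigger ahriman.core.upload.UploadTrigger ahriman.core.report.ReportTrigger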
Maintenance packages
--------------------