lengthy timeout: OK for most cases, unless on a very slow server.
monitor database threads/transactions: difficult, impractical (if even possible).
mysqlshow (or similar) loop until tables are available: OK in most cases;
edge case: unlikely to happen on a first setup, but is it possible that all tables are created while there are still pending transactions (many or big INSERTs, UPDATEs) when the process is killed?
monitor creation of some file or the content of a log: I don’t know the order of Gitea events.
monitor the output of the gitea web command or the connection status to localhost:3000 (see below): OK, if the assumption is correct, until upstream changes the command’s behaviour:
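The mysqlshow/curl loop ideas above could be sketched as one generic retry helper. This is a minimal sketch, assuming a POSIX shell; the probe commands named in the comments (mysqlshow, curl) are my assumptions based on the options listed, not a tested implementation:

```shell
#!/bin/sh
# Sketch: retry a probe command once per second until it succeeds or a
# timeout expires. The real probe would be something like
# "mysqlshow gitea user" or "curl -fs http://localhost:3000/"
# (both hypothetical here).
wait_for() {
  tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0            # probe succeeded: assume the service is ready
    fi
    sleep 1
    tries=$((tries - 1))
  done
  return 1                # gave up after the timeout
}

wait_for 30 true && echo "ready"
```

The same helper works for any of the probe styles discussed, so the choice of probe stays a one-line decision.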
$ GITEA_WORK_DIR='/var/lib/nethserver/gitea' /usr/bin/gitea web --config /etc/gitea/app.ini
2018/08/26 10:22:37 [T] AppPath: /usr/bin/gitea
2018/08/26 10:22:37 [T] AppWorkPath: /var/lib/nethserver/gitea
2018/08/26 10:22:37 [T] Custom path: /var/lib/nethserver/gitea/custom
2018/08/26 10:22:37 [T] Log path: /var/log/gitea/
2018/08/26 10:22:40 Serving [::]:3000 with pid 4204
[Macaron] 2018-08-26 10:22:43: Started GET / for 127.0.0.1
[Macaron] 2018-08-26 10:22:43: Completed GET / 200 OK in 921.834µs
I think the database has been fully populated either when gitea prints the highlighted “Serving” line or when localhost:3000 accepts connections (tried with curl, hence the last two lines).
If the output of the command can be piped or redirected to your script (or you wait for an OK response from curl), I think it can help…
# su gitea -c "GITEA_WORK_DIR='/var/lib/nethserver/gitea' /usr/bin/gitea web --config /etc/gitea/app.ini" >1
2018/08/26 10:21:47 Serving [::]:3000 with pid 4088
^C
Session terminated, killing shell...2018/08/26 10:21:51 Exiting pid 4088.
...killed.
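Along those lines, gating on the “Serving” line could look like the sketch below. The grep pattern is an assumption drawn from the log output above, and the commented-out usage line simply copies the command shown earlier:

```shell
#!/bin/sh
# Sketch: read gitea's log output on stdin and return as soon as the
# "Serving" line appears (pattern taken from the output above).
wait_for_serving() {
  grep -q 'Serving'   # -q exits successfully on the first match
}

# Hypothetical usage (command line copied from the post above):
#   su gitea -c "GITEA_WORK_DIR='/var/lib/nethserver/gitea' /usr/bin/gitea web --config /etc/gitea/app.ini" 2>&1 | wait_for_serving
printf '2018/08/26 10:21:47 Serving [::]:3000 with pid 4088\n' | wait_for_serving && echo "gitea is serving"
```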
It worked after some rework (I should have known better: use absolute paths in bash scripts!).
It took a while to realize the local repository entry in /etc/yum.repos.d/ is set to disabled during the recovery event. To be able to simulate a full test, I hacked the settings a bit by exchanging my local settings in NethFrog.repo.
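For reference, flipping that disabled flag back on could be sketched as below. The sed edit is my assumption of how the hack might look, not the exact change made; the NethFrog.repo path is the file named above:

```shell
#!/bin/sh
# Sketch: flip enabled=0 to enabled=1 in a yum repo file so the local
# repository stays usable across the recovery event (GNU sed assumed).
enable_repo() {
  sed -i 's/^enabled=0/enabled=1/' "$1"
}

# Hypothetical usage against the file mentioned above:
#   enable_repo /etc/yum.repos.d/NethFrog.repo
```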
A simple workaround for the above is to install nethserver-gitea before issuing restore-data on the command line.
Thanks, this was indeed a clever way to go, and it is implemented!
The latter gives a few entries for “PASSWD”; one of them should match the hash from the first command. Alternatively, just look at the [database] section in /etc/gitea/app.ini.
(obviously, no need to share the password)
If they do not match, signal-event nethserver-gitea-update may correct this…
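To compare by hand, the PASSWD value can be pulled out of the [database] section. A small sketch: the awk approach is mine, while the file path and the PASSWD key come from the post above:

```shell
#!/bin/sh
# Sketch: print the PASSWD value from the [database] section of app.ini.
db_passwd() {
  awk -F ' *= *' '
    /^\[database\]/ { in_db = 1; next }   # enter the [database] section
    /^\[/           { in_db = 0 }         # any other section header ends it
    in_db && $1 == "PASSWD" { print $2 }
  ' "$1"
}

# Hypothetical usage: db_passwd /etc/gitea/app.ini
```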
After doing a complete round of testing once more, the module and the gitea package itself are almost in alpha state.
Two todo’s after this i’d like a (code) review; process the remarks, if possible - and request to push it to NethForg testing…
Still need to do my “favorite” part: write some documentation. And:
Choose a port; the upstream default is port 3000. A bit of internet searching shows this port is busy.
I’d just take an unused port from a ports database like speedguide, but maybe we should create a list of ports used in NethServer to avoid “port clashes” between different NethServer modules.
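A quick local check for a candidate port could be sketched like this. The ss-based probe is my assumption of how one might test for a clash on the server itself (it only sees ports already listening, not ports merely reserved by another module):

```shell
#!/bin/sh
# Sketch: report whether a local TCP port already has a listener,
# assuming the ss tool from iproute2 is available.
port_in_use() {
  ss -tln 2>/dev/null | awk -v p=":$1\$" '$4 ~ p { found = 1 } END { exit !found }'
}

if port_in_use 3000; then
  echo "port 3000 is in use; pick another"
else
  echo "port 3000 looks free"
fi
```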
Still not able to reproduce it completely, but I found suboptimal behavior using this procedure.
It is better to restore the configuration before installing the RPMs, i.e.:
Log in to server-manager, restore configuration.
Log back in to server-manager (required since IP changed after restoring configuration), install updates through software center.
scp the above two RPMs to the machine, then yum install *.rpm.
-OR-
Upload the backup-config in the first-config wizard.
Log back in to server-manager (required since IP changed after restoring configuration), install updates through software center.
scp the above two RPMs to the machine, then yum install *.rpm.
The reason is simple: restore-config restores gitea’s (former) password for MySQL, while installing gitea creates a (new) password and/or tables in MySQL if they do not exist. This throws gitea into a bad state, from which it eventually recovers.
The order of restoring the config and then installing the package is fine if the package is in a repo that is available during the config-restore event.
As a side note, this is a bit of a problem: when you upload the configuration, it reconfigures the network, meaning the connection to the server manager goes away. As a result, you can’t see when the restore/reconfiguration finishes, or whether there were any errors. I’m not sure if there’s a good way to avoid this, though.
Tried again with this variation on the process, and it seems to be working fine: I’m able to log in, the repos are there, I can view files in them, etc.
Now it’s time to write documentation, which obviously includes changing the password of “gitea”; although (as you probably figured out) there is another option:
If you have given one of the users admin rights, this user can remove the user gitea.
…which brings up something else that can be kind of frustrating: so many of these modules (Nextcloud, WebTop, Rainloop, etc.) have their own unique admin users. Wouldn’t it be more sensible to use the system admin user? Or better yet, anyone in the Domain Admins group? (Ideal would probably be that anyone in Domain Admins or appname_admins could do it.) That’s a broader issue than Gitea, of course, but it does seem to have some application here.