This is an update to the post I made a week earlier. The issue has received no reaction whatsoever on GitHub so far.

In short, the original issue described how running the associated Ansible Galaxy role fails on every subsequent run on Arch-based systems because of a duplicate pid directive. I was calling for some kind of fix to be adopted upstream.

Although the role has a mature, fairly well documented templating mechanism, and users are highly encouraged to make proper use of it, I stated in the post and in the issue thread that it is not sufficient for overcoming this bug, so the post roughly described how the role can be forked and the template modified.
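For reference, the fork boils down to guarding or removing the duplicate directive in the template. A minimal sketch, assuming the upstream template renders the pid file path from the role's nginx_pidfile variable (the exact line may differ between role versions):

```jinja
{# Forked templates/nginx.conf.j2 - sketch only: guard the pid line so it can
   be disabled on systems where the directive is already provided elsewhere #}
{% if nginx_pidfile %}
pid {{ nginx_pidfile }};
{% endif %}
```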

Types hash max size option

Over time, I have discovered another issue, regarding another Nginx configuration option named types_hash_max_size. The issue manifests itself as an Nginx warning, visible via nginx -t or in the systemd journal:

[warn] could not build optimal types_hash, you should increase either types_hash_max_size: 1024 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size

Of course the solution, apart from the warning itself being pretty helpful and verbose already, is also documented on the Arch wiki:

http {
    types_hash_max_size 4096;
    server_names_hash_bucket_size 128;
}

As with the original pid directive issue, the role templating variables do not cover the solution directly: there is only the nginx_server_names_hash_bucket_size template variable and no predefined one for types_hash_max_size. Compared to the problematic pid directive, there are some important differences:

  • This problem will not prevent subsequent role runs
  • This problem does not require removal of lines from the template, only addition (backward compatible)
  • This problem can be easily solved via nginx_extra_http_options variable

Let's explore the third option.

Extra http options role variable

For those not yet used to the Jinja2 templating mechanism, or who missed the entry in the role documentation: adjusting both the hash size and the bucket size to match the values recommended above is possible in the following fashion, shown here as a minimal playbook example:

- hosts: my_hosts
  roles:
    - { role: geerlingguy.nginx }
  vars:
    nginx_conf_template: "{{ playbook_dir }}/templates/nginx.conf.j2"
    nginx_server_names_hash_bucket_size: "128"
    nginx_extra_http_options: |
      types_hash_max_size 4096;
The default bucket size here is 64, and the line increasing it to 128 could even be omitted, as it does not cause any immediate warning, but increasing it could prevent runtime warnings like client intended to send too large body later.

Also, this setup places the two options quite far apart in the resulting configuration file, which is not optimal, but with Ansible one should not really touch the resulting files anyway. Still, it would be nicer to have them placed together for anyone later reading the file.

Why not just edit the already forked template?

If I wanted to have both variables placed together in the resulting file, I would need to edit the template by either removing the nginx_server_names_hash_bucket_size line entirely or introducing another variable for types_hash_max_size, because simply putting the bucket size directive inside nginx_extra_http_options as well obviously results in an emergency-level error:

[emerg] 6175#6175: "server_names_hash_bucket_size" directive is duplicate in /etc/nginx/nginx.conf:40
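For completeness, the second option would be a small template edit. A sketch of what the relevant forked template lines could look like, where nginx_types_hash_max_size is a hypothetical new variable name, not one provided by the role:

```jinja
{# Forked templates/nginx.conf.j2 - sketch only, both directives together #}
server_names_hash_bucket_size {{ nginx_server_names_hash_bucket_size }};
types_hash_max_size {{ nginx_types_hash_max_size | default(1024) }};
```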

However, I decided not to edit the template further. Until there is an upstream change, I still have to maintain my own forked version of templates/nginx.conf.j2, referenced in the above playbook example via the nginx_conf_template variable, because of the pid issue. Forking is always a double-edged sword: it allows solving the problem at hand immediately, but at the same time it requires more work pulling in upstream changes, of which the bug-fix related ones are especially important.

I decided to keep the changes in the forked template to an absolute minimum, as that increases the chances they get adopted upstream, and instead of adding or removing variables or blocks in the forked template, I am using the method described in the above playbook as-is for now.

This is the 46th post of #100daystooffload.