When I clustered my HashiCorp Vault, I found that the cron-wrapper script that I have on GitHub would not download, despite my adding it as a proxied URL to my proxy. This seemed to be because the response exceeded the proxy's buffer size.

To limit access to just my projects, rather than opening up a route that would allow an adversary to upload or download anything they liked via GitHub, I specifically configured /loz-hurst (my profile):

- name: github/loz-hurst
  upstream: https://github.com/loz-hurst/
  description: My GitHub repositories
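
Based on the entry above, the proxy exposes my repositories under /github/loz-hurst/. A minimal sketch of the kind of NGINX location block this ends up generating (the exact rendering depends on my webserver role, so treat the details as illustrative):

```nginx
# Illustrative rendering of the github/loz-hurst mirror entry:
# requests to /github/loz-hurst/... are proxied to github.com/loz-hurst/...
location /github/loz-hurst/ {
    # Upstream is HTTPS, so send SNI so the upstream presents the right certificate
    proxy_ssl_server_name on;
    proxy_pass https://github.com/loz-hurst/;
}
```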

Thanks to the work I have previously done on local mirrors in my air-gapped lab, I just added a github key to local_mirror, pointing at the proxy, in my domain group variables for the live network. It then gets picked up by, for example, {{ local_mirror.github.uri | default('https://github.com') }}:

local_mirror:
  github:
    uri: http://mirror/github/
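
With that variable in place, tasks can fetch via the mirror and still fall back to GitHub proper when no mirror is defined. A hypothetical sketch (the repository path, filename and destination are illustrative, not my real task):

```yaml
# Illustrative task - downloads via local_mirror.github.uri when defined,
# otherwise straight from https://github.com/. Note the trailing slash on
# the default, matching the trailing slash on the mirror URI.
- name: Cron wrapper script is downloaded
  ansible.builtin.get_url:
    url: "{{ local_mirror.github.uri | default('https://github.com/') }}loz-hurst/cron-wrapper/raw/main/cron-wrapper.sh"
    dest: /usr/local/bin/cron-wrapper
    mode: "0755"
```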

I found that this was not working, and the error message I was seeing in my proxy NGINX server’s log was upstream sent too big header while reading response header from upstream. I tried disabling buffering entirely with `proxy_buffering off;`, but this had no effect, so instead I increased the buffer sizes as recommended by various sources I found by Googling this error message.

I started with an argument in the mirror role’s meta/argument_specs.yaml:

large_buffers:
  type: bool
  default: false
  description: If set to true, will increase the buffer sizes for this URL.
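
With that argument defined, enabling the larger buffers for GitHub is just a matter of setting the flag on the proxy entry. A sketch, assuming the flag sits per entry in mirror_proxies (which is how the task further down reads it, via item.large_buffers):

```yaml
# Illustrative mirror_proxies entry with the new flag enabled
- name: github/loz-hurst
  upstream: https://github.com/loz-hurst/
  description: My GitHub repositories
  large_buffers: true
```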

Then I altered the configuration that the role passes to the webserver’s add_site entry point, in its tasks/main.yaml, to include larger buffers. I hardcoded the values until I have a use-case for customising them at this level:

- name: Mirror proxies are in nginx configuration
  ansible.builtin.set_fact:
    nginx_mirror_config: >-
      {{
        nginx_mirror_config +
        [{
          'location': '/' + item.name + '/',
          'configuration': configuration,
        }]
      }}
  vars:
    configuration: |
      {% if item.upstream.startswith('https') %}
      proxy_ssl_server_name on;
      {% endif %}
      proxy_pass {{ item.upstream }};
      {% if item.large_buffers | default(false) %}
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;
      {% endif %}
  loop: '{{ mirror_proxies }}'
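
For the GitHub entry with large_buffers enabled, the rendered configuration comes out roughly as follows (illustrative; the exact output depends on the webserver role’s template):

```nginx
location /github/loz-hurst/ {
    proxy_ssl_server_name on;
    proxy_pass https://github.com/loz-hurst/;
    # Enlarged buffers so the oversized upstream response headers fit
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
}
```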

This fixed the problem. I also discovered that, because I had not added force: true to the ansible.builtin.copy module, existing configuration files were not being updated when their content changed. I added this to all configurations created by the webserver role’s various tasks.
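
A minimal sketch of the change, assuming a copy task like the webserver role’s (the task name, destination path and handler are illustrative):

```yaml
# Illustrative site-configuration task with the overwrite flag added
- name: NGINX site configuration is current
  ansible.builtin.copy:
    content: "{{ site_configuration }}"
    dest: /etc/nginx/sites-available/mirror
    force: true  # overwrite the existing file when its content differs
    mode: "0644"
  notify: Reload nginx
```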