Setup Reverse Proxy and CA server
This how-to describes how the reverse proxy and Certificate Authority (CA) server c2d-rproxy1
is created. This node performs several roles in this project. It is a reverse proxy, and it also performs a crucial secondary role as our own small CA, so we can create private keys, certificate signing requests, and certificates to facilitate secure communication between nodes and components.
Quick setup
You can create c2d-rproxy1 with vagrant up c2d-rproxy1
. When you run that command, Vagrant creates the node using the LXD provider and the Ansible provisioner then runs three plays.
To perform a simple setup and run these three plays:
unset PLAY # ensure all plays will run
vagrant up c2d-rproxy1
The plays to run for each node are configured in Vagrantfile.yml. Each node has a plays
variable which contains a list of plays to run. Provisioning can also be done in separate steps using the PLAY
environment variable.
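In Vagrantfile.yml this plays list might look like the following. This is a hypothetical fragment; the actual node attributes in this project may differ, but the three play paths match the ones used later in this how-to.

```yaml
# Vagrantfile.yml (sketch) — per-node list of plays to run on provision
c2d-rproxy1:
  plays:
    - plays/core/cacerts_server.yml
    - plays/mw/reverse_proxy.yml
    - plays/mw/dnsmasq.yml
```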
If you want to provision this node in separate steps, first create the node without provision:
vagrant up c2d-rproxy1 --no-provision
Setup using setup play
PLAY=setup vagrant up c2d-rproxy1
The setup play includes the same plays as the site play, so in the context of this how-to both execute the same tasks. This is because the automation involved here is simple. With more complicated products, for example the ForgeRock tools, a setup play will be fundamentally different from a site play.
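For this node, the relationship between the two can be pictured as a site play that simply imports the same plays the setup play runs. This is a hypothetical sketch, not the project's actual site.yml:

```yaml
# site.yml (sketch) — for this node, setup and site run the same plays
- import_playbook: plays/core/cacerts_server.yml
- import_playbook: plays/mw/reverse_proxy.yml
- import_playbook: plays/mw/dnsmasq.yml
```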
Setup
CA server
Note the contents of the .ca
folder. This folder contains the CA files to sign our certificates.
.ca/
└── c2
├── c2.crt
├── c2.csr
└── c2.key
Note: you can remove this .ca
folder; it will be re-created when you provision the node. The reason we keep this folder in Git is that we don’t want to re-import the CA certificate each time we recreate this node.
Now we can provision the CA server.
export PLAY=plays/core/cacerts_server.yml
vagrant provision c2d-rproxy1
The log TODO shows that tasks in various roles, e.g. bootstrap and os_trusts, are executed, but for this how-to only the cacerts2 role is important. It shows three tasks that create the three files in the .ca
directory. Simple enough.
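What those three tasks amount to can be sketched with plain openssl: a private key, a CSR, and a self-signed CA certificate. The commands and file names below mirror the .ca/c2 layout but are an illustration, not the role's actual implementation.

```shell
# Illustrative reproduction of the three CA files with plain openssl.
# Directory and file names mirror .ca/c2; all values are assumptions.
mkdir -p /tmp/ca-demo/c2
cd /tmp/ca-demo/c2
# 1. Private key for the CA
openssl genrsa -out c2.key 2048
# 2. Certificate signing request with the CA's common name
openssl req -new -key c2.key -subj "/CN=c2" -out c2.csr
# 3. Self-signed CA certificate
openssl x509 -req -in c2.csr -signkey c2.key -days 365 -out c2.crt
```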
The configuration for the CA server is in group_vars/all/smallca.yml
cacerts2_ca_server: "{{ groups['cacerts_server'][0] }}"
c2_cacerts2_ca_dir:
default: /etc/ownca
development: /vagrant/.ca
cacerts2_ca_dir: "{{ c2_cacerts2_ca_dir[c2_env]|default(c2_cacerts2_ca_dir['default']) }}"
cacerts2_ca_domain:
common_name: c2
cipher: auto
passphrase: "{{ c2_cacerts2_ca_domain_passphrase }}" # secret see vault
create: ['key','csr', 'crt', 'p12', 'pem']
The variable c2_env
is defined in group_vars/development.yml. Variables prefixed with c2_
are project variables and not role variables. See Variables naming. Note another project variable c2_cacerts2_ca_dir
. This is used to define /vagrant/.ca
as cacerts2_ca_dir
for “development”. This is the default mount point for Vagrant in each node; see the Vagrantfile, which contains the following line.
config.vm.synced_folder '.', '/vagrant'
So in this Vagrant-based “development” environment, the CA files created on node c2d-rproxy1 actually end up on the host via the /vagrant
mount, so they can be stored in Git.
Reverse proxy
Now we provision the reverse proxy. Note: we use vagrant provision
now and not vagrant up
because the node has already been created. See vagrant help
.
export PLAY=plays/mw/reverse_proxy.yml
vagrant provision c2d-rproxy1
The log shows new cacerts2 tasks being executed, for example:
TASK [c2platform.core.cacerts2 : Create dir for key, crt etc] ******************
changed: [c2d-rproxy1 -> c2d-rproxy1]
This task is “delegated” to the CA server, which in this project happens to be the same node, so it shows up as c2d-rproxy1 -> c2d-rproxy1. The .ca
folder now has the files and structure shown below. Keys and certificates are created on the CA server so they can be reused.
.ca
└── c2
├── apache
│ ├── c2-c2d-rproxy1.crt
│ ├── c2-c2d-rproxy1.csr
│ ├── c2-c2d-rproxy1.key
│ ├── c2-c2d-rproxy1.p12
│ └── c2-c2d-rproxy1.pem
├── c2.crt
├── c2.csr
└── c2.key
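The delegated tasks behind the apache/ files can be sketched with plain openssl: the node's key and CSR are created and then signed by the CA key. The common name, paths, and parameters below are assumptions for illustration; the real role also builds the .p12 and .pem variants.

```shell
# Illustrative sketch of issuing a node certificate signed by the CA.
# All names and paths are assumptions, not the role's implementation.
mkdir -p /tmp/ca-demo2/c2/apache
cd /tmp/ca-demo2/c2
# CA key + self-signed certificate (already present in the real .ca folder)
openssl req -x509 -newkey rsa:2048 -nodes -keyout c2.key \
  -subj "/CN=c2" -days 365 -out c2.crt
# Node key + CSR
openssl req -newkey rsa:2048 -nodes -keyout apache/c2-c2d-rproxy1.key \
  -subj "/CN=c2d-rproxy1" -out apache/c2-c2d-rproxy1.csr
# Sign the node CSR with the CA key
openssl x509 -req -in apache/c2-c2d-rproxy1.csr \
  -CA c2.crt -CAkey c2.key -CAcreateserial -days 365 \
  -out apache/c2-c2d-rproxy1.crt
# The signed certificate should verify against the CA
openssl verify -CAfile c2.crt apache/c2-c2d-rproxy1.crt
```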
The certificate creation is driven by the configuration in group_vars/reverse_proxy/certs.yml.
apache_cacerts2_certificates:
- common_name: c2
subject_alt_name:
- "DNS:{{ c2_domain_name }}"
- "DNS:{{ c2_env }}.{{ c2_domain_name }}"
- "DNS:www.{{ c2_domain_name }}"
- "DNS:www.{{ c2_env }}.{{ c2_domain_name }}"
- "DNS:{{ c2_domain_name_helloworld }}"
- "DNS:{{ c2_env }}.{{ c2_domain_name_helloworld }}"
- "DNS:www.{{ c2_domain_name_helloworld }}"
- "DNS:www.{{ c2_env }}.{{ c2_domain_name_helloworld }}"
- "DNS:{{ ansible_hostname }}"
- "DNS:{{ ansible_fqdn }}"
- "IP:{{ ansible_eth1.ipv4.address }}"
ansible_group: reverse_proxy
deploy:
key:
dir: /etc/ssl/private
owner: "{{ apache_owner }}"
group: "{{ apache_group }}"
mode: '640'
crt:
dir: /etc/ssl/certs
owner: "{{ apache_owner }}"
group: "{{ apache_group }}"
mode: '644'
The deploy
variable configures where the key and certificates will be placed. If you run the command below, you can see that the key and certificate are there.
vagrant ssh c2d-rproxy1 -c 'sudo ls /etc/ssl/private /etc/ssl/certs | grep rproxy1'
Example output
vagrant ssh c2d-rproxy1 -c 'sudo ls /etc/ssl/private /etc/ssl/certs | grep rproxy1'
c2-c2d-rproxy1.crt
c2-c2d-rproxy1.key
Connection to 10.176.104.153 closed.
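Outside of Ansible, the effect of the deploy settings (directory, mode) can be mimicked with install(1). The paths and file names below are illustrative only; the real role also sets the owner and group, which requires root.

```shell
# Mimic the deploy settings with install(1); illustrative paths only.
install -d /tmp/deploy-demo/private /tmp/deploy-demo/certs
touch /tmp/demo.key /tmp/demo.crt
# key: mode 640, as in the deploy config above
install -m 640 /tmp/demo.key /tmp/deploy-demo/private/c2-c2d-rproxy1.key
# crt: mode 644, world-readable
install -m 644 /tmp/demo.crt /tmp/deploy-demo/certs/c2-c2d-rproxy1.crt
stat -c '%a' /tmp/deploy-demo/private/c2-c2d-rproxy1.key   # prints 640
```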
The certificate and key are used to configure an Apache VirtualHost
. This is done in group_vars/reverse_proxy/files.yml, which contains the lines below.
SSLCertificateKeyFile {{ apache_cacerts2_certificates[0]['deploy']['key']['dest'] }}
SSLCertificateFile {{ apache_cacerts2_certificates[0]['deploy']['crt']['dest'] }}
The dest
key is created by the cacerts2 role using the c2platform.core.set_certificate_facts module. If you don’t like the path generated by this module, you can specify your own:
deploy:
key:
dest: /etc/apache2/my.key
dir: /etc/apache2/
Full example
apache_cacerts2_certificates:
- common_name: c2
subject_alt_name:
- "DNS:{{ c2_domain_name }}"
- "DNS:{{ c2_env }}.{{ c2_domain_name }}"
- "DNS:www.{{ c2_domain_name }}"
- "DNS:www.{{ c2_env }}.{{ c2_domain_name }}"
- "DNS:{{ c2_domain_name_helloworld }}"
- "DNS:{{ c2_env }}.{{ c2_domain_name_helloworld }}"
- "DNS:www.{{ c2_domain_name_helloworld }}"
- "DNS:www.{{ c2_env }}.{{ c2_domain_name_helloworld }}"
- "DNS:{{ ansible_hostname }}"
- "DNS:{{ ansible_fqdn }}"
- "IP:{{ ansible_eth1.ipv4.address }}"
ansible_group: reverse_proxy
deploy:
key:
dest: /etc/apache2/my.key
dir: /etc/apache2/
owner: "{{ apache_owner }}"
group: "{{ apache_group }}"
mode: '640'
crt:
dir: /etc/ssl/certs
owner: "{{ apache_owner }}"
group: "{{ apache_group }}"
mode: '644'
Setup DNS
export PLAY=plays/mw/dnsmasq.yml
vagrant provision c2d-rproxy1
For more information see How-to DNS.
Verify
Run the command vagrant ssh c2d-rproxy1 -c 'curl https://c2platform.org/is-alive --insecure'
to verify Apache2 is up and running.
vagrant ssh c2d-rproxy1 -c 'curl https://c2platform.org/is-alive --insecure'
Apache is aliveConnection to 10.176.104.153 closed.