(Excluding our NTP master.) It's simpler, arguably more secure, and
provides enough functionality when only simple client use cases are
desired.
We also allow outgoing connections to 123/udp on NTP slaves, so that
systemd-timesyncd can reach the fallback NTP servers.
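
For illustration, the slave-side rule could look like this iptables
sketch (the exact chains are an assumption, not necessarily the
repository's actual ruleset):

    # Allow outgoing NTP queries so systemd-timesyncd can reach the
    # fallback servers; replies come back via connection tracking.
    iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
    iptables -A INPUT -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT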
We don't need suspend-on-disk (hibernation).
(A validating, recursive, caching DNS resolver.)
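
If the resolver meant here is Unbound (upstream describes it in exactly
these words), a minimal server block could look like the following; the
values are illustrative, not the playbook's actual settings:

    # Minimal sketch of /etc/unbound/unbound.conf
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
        # enable DNSSEC validation
        auto-trust-anchor-file: "/var/lib/unbound/root.key"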
It's become too verbose (too many false positives)…
We use a dedicated, non-routable, IPv4 subnet for IPSec. Furthermore,
the subnet is nullrouted in the absence of an xfrm lookup (i.e., when
there is no matching IPSec Security Association) to avoid data leaks.
Each host is associated with an IP in that subnet (and is thus only
reachable within that subnet, either by the host itself or by its IPSec
peers).
The peers authenticate each other using RSA public key authentication.
Kernel traps are used to ensure that connections are only established
when traffic is detected between the peers; after 30m of inactivity
(this value needs to be less than the rekeying period) the connection is
brought down and a kernel trap is installed again.
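
A sketch of the two mechanisms just described, with a made-up subnet
and connection name (the actual playbook may differ):

    # Nullroute the dedicated subnet, so packets never leave in the
    # clear when no IPSec Security Association matches:
    ip route add blackhole 172.16.0.0/24

    # strongSwan-style connection fragment: auto=route installs a
    # kernel trap, so the tunnel is only negotiated once matching
    # traffic appears; inactivity tears it down again after 30 minutes.
    conn peer-host
        authby=rsasig
        auto=route
        inactivity=30m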
Inter-host communications are protected by stunnel4. The graphs are
only visible on the master itself, and their content is generated via
FastCGI.
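
An illustrative stunnel4 service definition for such a setup (the host
name and TLS port are made up; 4949 is munin-node's default port):

    # On the master: wrap the plaintext munin-node protocol in TLS, so
    # the munin master polls localhost instead of the remote node.
    [munin-node]
    client = yes
    accept = 127.0.0.1:4949
    connect = node.example.org:4950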
Using client-side data signing/encryption and wrapping inter-host
communication in stunnel.
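
Illustrative only; the tooling isn't named in this message, but the
idea is that data is protected before it leaves the client, e.g.:

    # Sign and encrypt on the client before anything crosses the
    # network (recipient key is a placeholder):
    gpg --sign --encrypt --recipient backup@example.org dump.tar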
(Unless a new instance is created, or master.cf is modified.)
Changing some variables, such as inet_protocols, requires a full
restart, but most of the time a restart is overkill.
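
In terms of the standard Postfix commands, the rule above amounts to:

    # Most configuration changes are picked up by a reload:
    postfix reload
    # A new instance, a master.cf change, or variables such as
    # inet_protocols call for a full stop/start:
    postfix stop && postfix start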
Instead, generate a server certificate for each host (on the machine
itself). Then fetch all these certs locally, and copy them over to each
IPSec peer. That requires more certs to be stored on each machine (n
vs. 2), but it can be done automatically, and is easier to deploy.
Note: when adding a new machine to the inventory, one needs to run the
playbook on that machine first (to generate the cert and fetch it
locally), then on all other machines.
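
A rough sketch of that flow with openssl (paths, subject, and lifetime
are made up; the playbook's actual tasks may differ):

    # On each host: generate a key pair and a self-signed server cert.
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -subj "/CN=$(hostname --fqdn)" \
        -keyout /etc/ipsec.d/private/host.key \
        -out /etc/ipsec.d/certs/host.pem
    # Then fetch every host.pem to the controller, and push the whole
    # set back out to each peer (n certs per machine instead of 2).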
We use a "master" NTP server, which synchronizes against stratum 1
servers (hence is a stratum 2 itself); all other clients synchronize to
this master server through IPSec.
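
On a slave this amounts to a single server line (the hostname is made
up), reached over the IPSec subnet:

    # /etc/ntp.conf on an NTP slave (illustrative)
    server ntp-master.example.org iburst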
We use a dedicated instance for each role: MDA, MTA out, MX, etc.
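
With Postfix's multi-instance support, that could be set up along these
lines (the instance and group names are made up):

    # Enable multi-instance support, then create one instance per role.
    postmulti -e init
    postmulti -I postfix-mx -G mta -e create
    postmulti -I postfix-out -G mta -e create
    postmulti -I postfix-mda -G mta -e create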
This is pointless since the service will be restarted anyway.
At each IPSec end-point, traffic is DNAT'ed to / MASQUERADE'd from our
dedicated IP after ESP decapsulation. Also, some iptables rules ensure
that alien traffic (not coming from / going to the tunnel end-point) is
dropped.
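
Illustrative only (the addresses are made up; the real rules may
differ):

    # Traffic arriving through the tunnel is DNAT'ed to the dedicated
    # IP; traffic leaving through it is MASQUERADE'd:
    iptables -t nat -A PREROUTING -m policy --dir in --pol ipsec \
        -j DNAT --to-destination 172.16.0.1
    iptables -t nat -A POSTROUTING -m policy --dir out --pol ipsec \
        -j MASQUERADE
    # Drop "alien" packets claiming the dedicated subnet but not
    # arriving through an IPSec Security Association:
    iptables -A INPUT -s 172.16.0.0/24 -m policy --dir in --pol none -j DROP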
These rules are automatically installed by third-party software such as
strongSwan or fail2ban.