docs: adding networking info to docs.
parent f0abff9f6a · commit 2a7057230b
@@ -12,6 +12,7 @@ bladerunner

    intro
    dev
+   network
    node-provisioning
    packer
    tools
@@ -43,6 +43,11 @@ Below is a diagram of the planned system.
     tpm03;
     tpm04;
+    tpm05;

+    pi401;
+    pi402;
+    pi403;
+    pi404;
 }

 "poe-switch" -> dev01 [dir=both];
@@ -57,8 +62,16 @@ Below is a diagram of the planned system.
 "poe-switch" -> tpm04 [dir=both];
+"poe-switch" -> tpm05 [dir=both];

 "poe-switch" -> gw [dir=both];
 publicnet -> gw [dir=both];
+"poe-switch" -> pi401 [dir=both];
+"poe-switch" -> pi402 [dir=both];
+"poe-switch" -> pi403 [dir=both];
+"poe-switch" -> pi404 [dir=both];
+
+"poe-switch" -> haven [dir=both];
+"poe-switch" -> build [dir=both];

 "poe-switch" -> controller [dir=both];
 publicnet -> controller [dir=both];
 }
@@ -0,0 +1,37 @@
+Networking (Notes)
+==================
+
+**Note**: this document is just notes for me to plan future work, basically a
+brain dump. It does not document the current state of the system; it only
+documents an idea for one path forward.
+
+Network layout
+--------------
+*The specifics here are very much subject to change.*
+
+Right now, I have the network laid out on ``192.168.4.0/24``. The ``.1-.20``
+hosts are on DHCP; three IPs are assigned to meta/infra nodes, and the rest are
+reserved. Compute nodes are given the hostname ``nodeXX``, where ``XX`` is
+their host address. The limiting factor here is available network ports: I
+only have 24 in this rack. I could add another switch, but I don't have a
+compelling reason to take up the space.
+
++ the compute blades are assigned the host addresses ``.1 - .10``.
++ the RPi4 cluster is assigned the host addresses ``.11 - .14``.
++ the secure services node is assigned the host address ``.252``, hostname ``haven01``.
++ the build server is assigned the host address ``.253``, hostname ``build01``.
++ the cluster controller and router is assigned the host address ``.254``,
+  hostname ``controller``.
+
+Infrastructure services
+-----------------------
+
++ I think the controller will have a TFTP/PXE boot server as well as run DHCP
+  and DNS. I'll also run a `Tailscale <https://tailscale.com/>`_
+  `subnet router <https://tailscale.com/kb/1019/subnets/>`_ here.
+
++ The build server is on the network just as a convenience; it's an Intel NUC
+  that will be used as a development and staging system for infrastructure.
+
++ The haven system will get its own page, but it will own the identity
+  management system as well as the secrets vault.
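(Editor's aside, not part of the commit: the infrastructure-services plan above, DHCP plus DNS plus TFTP/PXE on the controller, could be covered by a single dnsmasq instance. The sketch below is only illustrative; the interface name ``eth0``, the domain name, the TFTP root, and the boot filename are all assumptions, not values from the repository.)

```ini
# /etc/dnsmasq.conf -- hypothetical sketch, not taken from the commit
interface=eth0                          # assumed LAN-facing interface

# DHCP pool for the .1-.20 range described in the notes
dhcp-range=192.168.4.1,192.168.4.20,12h

# Static leases for the infra nodes named in the notes
dhcp-host=haven01,192.168.4.252
dhcp-host=build01,192.168.4.253

# Local DNS; the domain name is an assumption
domain=cluster.local
expand-hosts

# TFTP/PXE boot; paths and filename are assumptions
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0
```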
@@ -60,7 +60,7 @@
     },
     {
       "destination": "/etc/netplan/10-network.yaml",
-      "source": "files/network-dev.yaml",
+      "source": "files/netplan-dev.yaml",
       "type": "file"
     },
     {
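(Editor's aside, not part of the commit: the file copied to ``/etc/netplan/10-network.yaml`` would be a netplan config. The actual contents of ``files/netplan-dev.yaml`` are not shown in this commit; a minimal DHCP-client netplan file might look like the sketch below, where the interface name is an assumption.)

```yaml
# Hypothetical netplan config; the real files/netplan-dev.yaml
# is not shown in this commit.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:          # assumed interface name
      dhcp4: true
```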
@@ -25,8 +25,6 @@
 # infrastructure systems #
 ##########################

-192.168.4.32 chaven01 # Zymbit D35 secure services system
-192.168.4.33 cbuild01 # build server
-192.168.4.64 control # cluster controller and router
-192.168.4.65 cdev # cluster dev machine
+192.168.4.252 haven01 # Zymbit D35 secure services system
+192.168.4.253 build01 # build server
 192.168.4.254 controller # cluster controller and router