The infrastructure that I have available is my OurCompose Nextcloud instance, and a local NAS server in the form of a FreeNAS appliance. I will also be subscribing to a seedbox provider for the upstream ingress/egress points for my torrents.
The reason that we are running a Nextcloud instance locally is so that it can federate remotely to my cloud Nextcloud instance without having to bog down that cloud instance with a ton of files.
The FreeNAS box hosts the local Nextcloud instance, which is installed as a plugin. However, due to my awesome network setup, I can’t have the jail live on the default interface. I put it on my last available interface, which I hooked up to the same VLAN that carries my externally-accessible personal services. In reality, though, this will only be accessible from the cloud-based instance via a reverse proxy, but we’ll go into that setup later.
The install can’t be done via the GUI: the simple setup doesn’t allow for setting a custom VNET interface, and the advanced configuration kept throwing an error ("please check that the dataset is mounted") that even my google-fu wasn’t able to fix.
So I tried to run the simple setup in the Shell with the custom VNET interface, which worked like a charm:
```shell
# iocage fetch --plugins dhcp=on vnet=on vnet_default_interface=em0
[...]
Type the number of the desired plugin
Press [Enter] or type EXIT to quit: 10
[...]
Admin Portal: http://<IP address>
```
So, true to form, Nextcloud wants to have its “trusted_domains” configuration set up. We go into the Jail’s Shell and take care of that:
```shell
# pkg install nano
# nano /usr/local/www/nextcloud/config/config.php
[...]
'trusted_domains' => array(
  0 => 'localhost',
  1 => '<IP Address>',
  2 => '<local DNS>',
  3 => '<external DNS>',
),
[...]
```
I also added the DNS entry, since it gets populated in my local DNS by the DHCP registration on the VLAN. Since it is just PHP configuration, it doesn’t require a restart of any service; I just refreshed my web browser and was dropped at a login screen. Unfortunately, the install output does not give the admin password; that needs to be retrieved from the “Post Install Notes” section of the now-listed jail on the Plugins page in the GUI.
Luckily the VLAN setup should be the exact same, since it is going on the VLAN that my other externally-available services are on. Since this interface is already set up and the incoming switch forwards it to the same port, I just had it connect automatically and get its IP address from DHCP. Voila, what a miracle.
Squid is still set up as the reverse proxy, and it’s what I would have used if it supported multiple certs on different ports. However, it only supports one SSL cert, which is stupid. So it looks like I have to go with HAProxy.
First I downloaded and installed it with pfSense’s package manager. Then I created the backend; I only had to fill out the Name and Server list, leaving everything else at the default. For the frontend, it was the Name, Description, External Address, Default Backend, and Certificate for SSL Offloading. I also had to fill out the ACLs here, which was weird, having filled out the Default Backend already. For the Access Control Lists, I created an ACL rule with ‘Host starts with:’ as the Expression and my DNS name as the Value. For the Actions, I then selected “Use Backend” as the Action, referencing the ACL I defined above, with Nextcloud as the backend. Then on the Settings tab, I set the Max connections to 1000 and the Max SSL Diffie-Hellman size to 4096. Obviously I also enabled it at the very end.
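For reference, the GUI settings above boil down to an haproxy.cfg along these lines. This is a sketch, not the package’s actual output: the frontend/backend names, cert path, and addresses are placeholders, and the generated file will differ in its details.

```
frontend Nextcloud
    bind <WAN address>:8443 ssl crt /var/etc/haproxy/<cert>.pem
    mode http
    # 'Host starts with:' in the GUI maps to an hdr_beg(host) ACL
    acl nextcloud-host hdr_beg(host) -i <dns>
    use_backend Nextcloud if nextcloud-host
    default_backend Nextcloud

backend Nextcloud
    mode http
    server nextcloud <jail IP>:80 check
```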
After that, I went into the Rules to allow traffic through since I set the port to 8443 in the frontend (since 80 and 443 were already taken). I added a Pass rule to the WAN interface for TCP traffic, with a source of ‘any’ (I want to be able to get there from internal networks too for testing), and a destination of the WAN address port 8443.
It didn’t work right off the bat, so I turned on Logging on the Settings page: I set the ‘Remote syslog host’, and for that Frontend turned on ‘Raise level for errors’ and ‘Detailed logging’.
It turns out that the health check for the Nextcloud server wasn’t returning healthy, so I just disabled it and was able to get through to the login screen just fine. However, there was one more adjustment to make. Since the HAProxy in front of it was offloading SSL traffic, the Nextcloud configuration had to have the following line added:
'overwriteprotocol' => 'https',
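While in there, Nextcloud’s `trusted_proxies` option can also be worth setting when sitting behind a reverse proxy, so that forwarded client addresses are honored. A sketch of the config.php fragment, where the proxy address is a placeholder:

```php
[...]
// <proxy IP> is a placeholder for the HAProxy box's jail-facing address.
'trusted_proxies' => array('<proxy IP>'),
[...]
```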
I am able to get to the admin screen now from the exterior domain. On to the next step!
This is the fun part.
Now, the Nextcloud server that’s hosted on OurCompose is out in the cloud, and I have admin access to it as well as to my internal one.
On both of them, I checked that the Federation app was installed and configured, and went to the admin settings to the Federation section.
In the “Trusted servers” section, I made sure to enable “Add server automatically once a federated share was created successfully”.
I also manually added the trusted servers.
On my local server, I had to put the full URI for my OurCompose server, since it lives in a subdirectory: I had to enter https://<dns>/nextcloud before it was able to recognize it.
Once I had that set up, I went to create the share.
It was easy enough to do.
I just had to specify that I was sharing it with the remote user’s federated cloud ID. Keep in mind that the ID includes `https://`; it’s the _full_ URL.
Once I had created the share, I had to accept the share on the remote instance. This was done by accepting the prompt from the notifications tab confirming the share. By default, it shared with all of the permissions that I needed it to have. However, I wasn’t able to see it, since I had logged into the local instance using the LAN address. Per the docs:
Your Nextcloud server creates the share link from the URL that you used to log into the server, so make sure that you log into your server using a URL that is accessible to your users. For example, if you log in via its LAN IP address, such as http://192.168.10.50, then your share URL will be something like http://192.168.10.50/nextcloud/index.php/s/jWfCfTVztGlWTJe, which is not accessible outside of your LAN. This also applies to using the server name; for access outside of your LAN you need to use a fully-qualified domain name such as http://myserver.example.com, rather than http://myserver.
So I had to log into the instance with the external address. That unearthed a couple of issues that I have addressed above.
The share address will contain all of the idiosyncrasies, like the subdirectory or the port number if necessary.
Once I did that, I was able to successfully test creating shares back and forth between both instances.
There are plenty of seedbox providers available, most notably Feral and Whatbox. I chose between those two, since they both provide support over IRC, offer Transmission, and give SSH access to their accounts.
Specifically for Transmission, I made sure to install the remote client version in order to control the remote daemon. It works just fine for adding torrents. And a seedbox is SO much faster than anything at home. However, I do want to store everything here, so…
Here’s where the magic happens. I will now be creating a directory on FreeNAS to share over the local network, and to attach to the Nextcloud jail, in order to federate with the externally-exposed Nextcloud instance.
There are a couple of requirements here. For starters, let’s take a look at the media directories that are set up:
```
.
├── Books
├── Music
├── Pics
└── Vids
    ├── Movies
    ├── Protected
    ├── Shows
    └── Ungrouped
```
This is because most of the media that I’ll be using will be looking for specific things in distinct directories, and this setup seems to work just fine.
I created another volume in FreeNAS (`freen54-media`), then added the volume into the jail as a mount point. I can’t mount it inside the actual nextcloud directory, because the resulting filename ends up too long.
FreeNAS Rsync Job
As a practical matter, in order to use rsync over SSH with the built-in rsync tasks, you have to use a user that has SSH keys. So I used my own user, made my group own the `freen54-media` directory, made that directory group-writable, and made sure that the directories will always have the correct permissions, like so:

```shell
# chown -R www:smacz /mnt/volume-default/freen54-media
# chmod -R u=rwX,g=rwX,o=rX /mnt/volume-default/freen54-media
# chmod -R u+s,g+s,o+s /mnt/volume-default/freen54-media
```
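One caveat worth knowing: `chmod -R u+s,g+s` stamps the setuid/setgid bits onto plain files as well as directories. If only the group-inheritance behavior is wanted, a `find -type d` variant limits the setgid bit to directories. A sketch, using a throwaway temp directory in place of `/mnt/volume-default/freen54-media`:

```shell
# A throwaway temp tree standing in for the real media volume.
MEDIA=$(mktemp -d)
mkdir -p "$MEDIA/Vids/Movies"
touch "$MEDIA/Vids/Movies/example.mkv"

chmod -R u=rwX,g=rwX,o=rX "$MEDIA"           # group-writable, as above
find "$MEDIA" -type d -exec chmod g+s {} +   # setgid on directories only

ls -ld "$MEDIA/Vids/Movies"                  # drwxrwsr-x: note the 's'
```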
Then create the Rsync Task in the UI with the following settings:
- Path: /mnt/volume-default/freen54-media/
- User: smacz
- Remote Host: email@example.com
- Rsync mode: SSH
- Remote SSH Port: 22
- Remote Path: <remote path>/
- Direction: Pull
- Short Description: Pull Media from Seedbox
- Schedule: Hourly
- Options:
  - Recursive
  - Compress
  - Delay Updates
Two notes here. Make sure to put trailing slashes on both directories so that rsync doesn’t create the source directory inside of where you actually want the files. Also, with the setgid bits set on the directories, rsync can’t set the times or ownership on them, so options that try to preserve those will fail if they’re selected.
Nextcloud Directory Access
In order to get to that directory (since we set up the permissions correctly), I enabled the External Storage app and added the Media directory as a local filesystem mount.
At this point, all the files just showed up. It took a while for Nextcloud to scan that directory, but it got there eventually.
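If waiting on the background scan is too slow, Nextcloud’s `occ files:scan` command can force it. From inside the jail, run as the web-server user (assumed to be `www` here, with the install path from earlier), that looks something like:

```shell
# su -m www -c "php /usr/local/www/nextcloud/occ files:scan --all"
```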
I’ve gone over federation before, but this setup allows me to only share subsections of my media, or the entire media setup with a single share. This is incredibly powerful.
FreeNAS Network Share
There is a network share, already set up in another blog post, that exposed this over WebDAV to access the media, and all I did was add another share to that setup. Yep, it was just that simple.
Final Media Consolidation
So now I have a location that I can access over the internal network shares with my phone and laptop on WiFi, all of my torrents pulled down from my seedbox and stored on my local FreeNAS, and external, shareable, federated access with the shared Nextcloud setup.
The canonical source of my media is now the NAS. Everything is stored there, so anything that I want to consolidate can be placed there and accessed in any of the ways it could have been before.
That’s pretty sweet. Now, to start gathering everything together…