

Using liberty-minded opensource tools, and using them well

The role of a sysadmin in the age of the SAFE Network

What the future of administration might be if/when the SAFE Network becomes a global standard.


I’ve been wondering about this for some time. Before running across the SAFE Network and MaidSafe, I foresaw a very exciting future in Systems Administration. But now, not so much.

What changed? Well, for one, since there are no centralized servers on the SAFE Network, there wouldn’t be much of a place for the in-house SysAdmin who never sees the light of day because he’s at the command line, SSHed into the Tokyo servers, trying to fix the firewall config. That job just won’t exist anymore. Poof!

Limiting the scope of our consideration to medium and large-sized companies, what would the role of a SysAdmin be in that organization? What would they do, and what would they be tasked with maintaining and/or implementing?

There are several areas where I foresee a bright future:

  • Integration/Deployment of Apps
  • App configuration management
  • Customizing/Branding
  • Cloning & Customizing Apps for in-house use only
  • Per-machine setup and maintenance
  • Permission management throughout the company
  • Connectivity Establishment
  • Farming Maintenance
  • Automated Tasks
  • Thin Clients
  • Routers & Connectivity Hardware
  • Name service maintenance
  • Datacenter transition from legacy to SAFE

Contrast those with responsibilities that will be eliminated:

  • Physical Server Maintenance
  • Server Log Monitoring
  • Server Uptime
  • Service Daemon Configuration and Maintenance
  • Authentication Management
  • Virtual Machine Management
  • Network Architecture
  • VPN Access
  • Data Center Storage/Access
  • Process Management

There are some questionable paradigms that may or may not be relevant:

  • Docker

Wow! There’s a lot of stuff here. As it should be: the whole paradigm is going to change. What should I be learning to prepare for it?

Self-hosted vs Cloud-based

There’s a lot to be said for comparing MaidSafe technology to Cloud technology. Both provide off-site storage and a general release from the burden of maintaining infrastructure and bare metal. But there are some significant areas where they differ.

With a cloud hosting company, sure, you don’t have to manually check whether the Cat5e cable fell out of the Ethernet port, but you still have to worry about system services. Services such as Postfix for email, or name services and Apache for website deployment.

These will all be antiquated once the SAFE Network comes to fruition. The question remains, though: “These services must still be provided. How are they to be provided, and what role will a SysAdmin play in delivering them?”

That I’m still not so sure of.


One awesome aspect of the SAFE Network is that DDoS attacks are mitigated by the native caching mechanism it employs. But that leads to another point as well. Where once there were massive data centers and servers trying their best to keep up with the demand for data, now there is only a network that does all of that, and more, autonomously. There is no need for hardware to keep terabytes upon terabytes of data stored for access when needed. There is no need for servers to process requests for that data and to serve it as needed. That is all done by the network. There is no need to maintain the electricity for that setup. There’s no need to architect redundancy for that setup. There’s no need to automate quality of service or load balancing.

It’s actually kind of sad, since back in the old days so much thought and effort had to be put into all that. It was almost, no, it was an art form.

Those days will all be over, and the only thing that will matter is the speed of the code that is written. The network can serve better than any server. The only restrictions will be the speed of your code and the bandwidth you are paying for.

There’s no need for SysAdmins to scramble to amass more and more hardware. There will be no reason. The network now takes care of every request and serves it up faster than you can say abracadabra.


So what then of the employees? How do they access the web? There’s no corporate infrastructure outside of the SAFE Network, so they could theoretically access the ‘net with only a laptop and a login.

Here’s where the scope of a SysAdmin comes into play. In medium to large-sized companies, there has to be some sort of authentication management. A company will presumably have a shared private network of sorts, containing private data but restricted to the users who are able to access it.

Talking about new hires and new fires seems the way to go (seeing as I was recently one of the two). If a company has a private share, its employees must have the ability to access it. How is that ability granted? Glad you asked.

Well, by SysAdmins, of course! Someone must share the keys with the people who need to access the system, and as turnover happens, the SysAdmins update access permissions accordingly. No CEO would dream of being responsible for this sort of tedious maintenance, and anyone who has seen HR in action would never trust them with this kind of power.
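As a thought experiment, the grant-and-revoke bookkeeping above could look something like this. This is purely a hypothetical sketch: the SAFE Network’s actual key-sharing API isn’t specified here, so `ShareAccessRegistry` and its methods are stand-ins for whatever primitive the network ends up exposing.

```python
class ShareAccessRegistry:
    """Hypothetical tracker for which employees hold a key to a private share."""

    def __init__(self, share_name):
        self.share_name = share_name
        self._holders = set()

    def grant(self, employee_id):
        # In practice, this is where a SysAdmin would transmit the share key
        # to the new hire over some secure channel.
        self._holders.add(employee_id)

    def revoke(self, employee_id):
        # On turnover, access is removed. A real system would also rotate
        # the share key, since a departing employee may have copied it.
        self._holders.discard(employee_id)

    def has_access(self, employee_id):
        return employee_id in self._holders


# A new hire and a new fire:
registry = ShareAccessRegistry("company-private-share")
registry.grant("alice")
registry.grant("bob")
registry.revoke("bob")  # bob leaves the company
```

Note the comment in `revoke`: simply forgetting a key holder isn’t enough when the key itself may have been copied, which is exactly the kind of detail a SysAdmin, not HR, would be expected to handle.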

Also, employees need computers to work, right? Who sets up those computers, the employees themselves? I highly doubt it. The configuration of these computers is far too vital to be trusted to a common employee. It needs to be handled by the people who know what they’re doing.

Per-machine daemons

Which brings me to another point. Since the machine is being used on the SAFE Network and connecting to the SAFE Network, why shouldn’t there be SAFE Network Daemons? Hell, why not call them Saemons! Ok, that’s bad, let’s just go with SNDs.

These daemons, much like any others on a computer, are constantly running processes, so that a computer, while connected to the SAFE Network, can store and process information locally. For instance: email.

Back in the good ol’ days, email was reserved for geeks and nerds who ran their own email servers. Typically these servers ran on the same computer they worked at regularly. There was no other machine to connect to when they wanted to check their email; they just pulled up their mail, accessible on their local hard drive, to see what messages had arrived in the time they had neglected it. On a dedicated machine connected to the SAFEnet, they could have a constantly running daemon that queries their mounted filesystem for the email spool. If a message has been received, the daemon notifies the user, and the user checks it via the local application.
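A minimal sketch of such an SND might poll a spool directory on the mounted filesystem. Everything here is an assumption: the mount point `/mnt/safe/mail/new`, the one-file-per-message layout, and the notification mechanism are illustrative stand-ins, since the real network exposes no such interface yet.

```python
import os
import time

SPOOL_DIR = "/mnt/safe/mail/new"  # hypothetical mount of the user's mail spool
POLL_INTERVAL = 30                # seconds between checks


def new_messages(spool_dir, seen):
    """Return entries that appeared since the previous poll, plus the new snapshot."""
    current = set(os.listdir(spool_dir))
    fresh = current - seen
    return fresh, current


def notify(msg_id):
    # A desktop daemon would raise a notification here; print stands in.
    print(f"New mail: {msg_id}")


def run(spool_dir=SPOOL_DIR):
    """Main daemon loop: poll the spool and announce anything new."""
    seen = set()
    while True:
        fresh, seen = new_messages(spool_dir, seen)
        for msg_id in sorted(fresh):
            notify(msg_id)
        time.sleep(POLL_INTERVAL)
```

Polling is the crudest possible design; if the mounted filesystem supported change notifications (the way inotify does locally), the daemon could react to new messages instead of scanning for them.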

But what about an app run from the ‘net itself? Well, I just don’t have an answer to that.

Threat Mitigation

Luckily, the attack surface will be so minute that many vulnerabilities will simply go away. This aspect of the SysAdmin’s job has less and less relevance. There’s not much more than app vulnerabilities and social engineering left to exploit on the SAFE Network.

Devs have all of the power