Securing server environments – part II – Networking

Securing your network
Juha Jurvanen’s thoughts on server security 
Second blog post about securing your server operations. Once again, it's not the gospel on all security issues, but I guess it's a start and a possible checklist to use.
Now, if we are more or less satisfied with the physical aspects of the data center, it's time to start thinking about the network design and firewalls. There's also a follow-up on operating systems here.
Depending on your needs, say if you have various branch offices or if all your servers and users are in the same location, the network design will vary.
First of all, make sure you have a decent firewall. It doesn't have to be that expensive and you can even use a free Linux distribution (such as Smoothwall) and set up a small server to act as the firewall. There are people out there stating that with IPv6 firewalls will maybe become unnecessary, but I highly doubt that. I have no idea how they've come to that conclusion so I'll just leave it at that. Firewalls will be around for a long time.
Just don't use an old workstation, because you usually can't set up RAID 5 on the disk volumes, and you really want that since hard drives will eventually fail on that server too.
You could also have a firewall that's actually installed on a DVD/CD-ROM, thus removing the ability for any attacker who has gained control to modify the rules. The drawback is that if you want to reconfigure, you need to burn a new disc each time to get the new rules in there.
You also want to have an extra power supply for redundancy, but that's about all you need to get a decent firewall up and running.
There are of course quite a few appliances out there with firewalls pre-installed, but that's basically what they are: small servers with some kind of software to allow or deny network traffic on specific ports.
The actual firewall rules aren't that complicated to set up and think through, really. Nowadays the GUI in the firewall software is usually quite self-explanatory, and even if you use the more high-end firewalls such as Clavister or FireWall-1, it doesn't take that long to understand them.
Basically, you decide which hosts (servers) on the inside of the firewall should be reachable from the outside world, and on what ports. Usually over TCP, since UDP isn't used as much for services exposed over the Internet. The difference between the two (TCP versus UDP) is that a TCP connection is negotiated and verified with acknowledgments, so the server and the host/user trying to reach a service know that they are actually communicating, while with UDP the host/workstation basically sends off a network packet and hopes for the best that it reaches its destination, without any response from the intended server.
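To make that difference a bit more concrete, here's a minimal Python sketch. The hostname and ports are just examples, and it naturally needs network access to run:

```python
import socket

# TCP: a connection is negotiated (three-way handshake) before any data is
# sent, so we immediately find out whether the other end is reachable.
try:
    with socket.create_connection(("www.example.com", 80), timeout=5) as tcp:
        print("TCP: handshake completed, the server is there and talking to us")
except OSError as err:
    print(f"TCP: connection failed, and we know it failed: {err}")

# UDP: there is no handshake. sendto() succeeds locally even if nothing is
# listening on the other side - the datagram is simply sent and forgotten.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello?", ("www.example.com", 9))  # port 9 (discard), most likely closed
print("UDP: packet sent, but we have no idea whether it ever arrived")
udp.close()
```

The TCP attempt either completes the handshake or fails with an error you can see, while the UDP send "succeeds" locally no matter what happens on the other end.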
Setting up the firewall rules should consist of a few things. Best practice is usually to have a DMZ (demilitarized zone) where you basically put the servers that should be reachable from the Internet (commonly mail servers, web servers, VPN (Virtual Private Network) servers and FTP (File Transfer Protocol) servers).
The rules are then set up so that the servers on the DMZ are only allowed to communicate with the servers behind the firewall on the specific ports decided by you.
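Just to illustrate the principle of "default deny, with a few explicitly opened flows", here's a small Python sketch. The host names and ports below are made up, and a real firewall of course has its own rule syntax:

```python
# Conceptual sketch of DMZ -> internal rules: default deny, with a few
# explicit exceptions. Hosts and ports are made-up examples.
ALLOWED = {
    # (source on DMZ, destination behind the firewall): allowed TCP ports
    ("mail-dmz", "dc-internal"): {389, 636},   # e.g. LDAP/LDAPS lookups
    ("web-dmz", "db-internal"): {1433},        # e.g. a database port
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Return True only if this exact flow has been explicitly opened."""
    return port in ALLOWED.get((src, dst), set())

print(is_allowed("web-dmz", "db-internal", 1433))  # True - explicitly opened
print(is_allowed("web-dmz", "dc-internal", 3389))  # False - default deny
```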
Unfortunately, some servers do require quite a few ports to be opened from the DMZ, such as a Microsoft Exchange server that needs to authenticate against the domain controllers and so on.
There's no law saying you actually must have a DMZ, but the idea behind it is that if one of the servers on the DMZ gets hacked, the attacker can't reach the other servers. I'm not that convinced, really. Once a hacker gains access to any of your servers, I'm fairly convinced that you've used the same passwords somewhere on that server as on the servers behind the firewall, thus handing over the information needed to log in to the rest of the system. People are people.
You just need to try as hard as possible not to have your servers hacked. I’ll get back to that topic further down the line in my little blog series.
So, regardless of whether you decide to use a DMZ or not, you still need backups of your servers, and sometimes putting servers on the DMZ can cause a bit of hassle when it comes to creating good backups.
One case I remember was a large Swedish website that took backups of their system to a local hard drive on a server on the DMZ with a script.
Usually, all kinds of backup scripts need administrative rights, and if you're using scripts, most likely you'll end up putting the administrator/root password in the script file.
This is what they did, and sure enough, someone found the script on the server from the Internet and could gain access to the server simply by using the passwords in the script.
If I recall correctly, the script file was even indexed by Google, so you should be a bit careful with what you allow Google and other search engines to see on your servers. We'll get back to that too further down the line.
Needless to say, once they were hacked they thought things through a bit more.
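If a script really does need a password, at least keep it out of the script itself. Here's a minimal Python sketch of that idea, where the secret lives in a separate file that only the administrator can read (the path is just an example, and it assumes a Unix-style system where the file already exists):

```python
import os
import stat
import sys

SECRET_FILE = "/etc/backup/secret"  # example path: root-only, outside any web root

def read_backup_secret(path: str = SECRET_FILE) -> str:
    """Read the backup password from a separate file instead of hardcoding it,
    and refuse to run if the file is readable by group or other users."""
    info = os.stat(path)
    if info.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        sys.exit(f"{path} is readable by other users - fix the permissions first")
    with open(path, encoding="utf-8") as f:
        return f.read().strip()

if __name__ == "__main__":
    secret = read_backup_secret()
    # ...hand the secret to the backup tool here, and never echo it to logs...
```

It's not bulletproof, but it beats a password sitting in a script that a web server might happily serve to the whole world.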
I'd say that having a backup server behind the firewall and pulling the backups from the DMZ servers with some kind of backup agent is a better solution than having the DMZ servers push data to a server behind the firewall.
Another solution is to have a look at some kind of online backup service, thus sending the data outside your data center automatically, such as SunGard's Vytal Vault, The Online Backup Company or any of the online backup service providers really. Just take your pick.
Just make sure that you use a good encryption password and that the backup services don't require you to run them as a user. It's better to be able to start them as a local system service that can't be used for logins. If you have backup software that requires user credentials, create a specific user for it and don't name the user something as obvious as, for instance, "backup". Just create a user such as John Smith, a regular Joe, and grant him only the necessary rights.
Have your servers on a backbone network by themselves and your workstations on another, thus giving the servers the network bandwidth they need to communicate with each other without being bothered by unnecessary broadcast traffic from your workstations.
Make sure all of the network cards in your servers are set to use maximum speed. Don't blindly trust auto-negotiation. Configure them specifically to use, for instance, 1 Gbit full duplex and to use jumbo frames (typically a 9000-byte frame size; just make sure that both your servers and switches support this).
If you have the ability, also consider NIC teaming, i.e. teaming two (or more) network cards. The idea is that you get higher performance, and should one of the network cards fail, the other ones would still be running and providing service to your users. An easy way to create redundancy.
Speaking of redundancy, you should also use a number of switches and, if possible, buy switches with two power supplies and the ability to create redundant networks. You might also want to configure your switch ports to use 1 Gbit full duplex with jumbo frames, if supported, and to have the switches monitored via SNMP by a specific monitoring server.
Don't use "public" as an SNMP community; name it something else, and also make sure that SNMP traffic isn't broadcast all over the place. Have it sent to a dedicated monitoring server. SNMP is not a very secure protocol.
Always have at least one spare switch in your data center and, if possible, a spare, pre-configured firewall. The more expensive firewalls also have the ability to create failover setups and such, but if you're looking to create a fairly cost-efficient networking environment, you probably won't have that in your arsenal.
When it comes to the firewall rules, here are a few tips that might be useful.
Decide, for instance, which server on the inside should be able to make DNS queries and have all of your workstations use that server as their DNS server. The drawback is of course that should that DNS server fail for some reason, no one will be able to reach anything, but the upside is that you reduce the traffic through the firewall, gain better performance and far better control of the network traffic from your workstations, and get an easy and cheap way of blocking or redirecting traffic to websites you don't want them to visit.
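If you want to sanity-check that this design actually holds, a small script can do it. Here's a sketch using the third-party dnspython package; the resolver addresses are made up, and the query to the external resolver is expected to time out if the firewall really only lets the internal DNS server out:

```python
# Check that only the designated internal DNS server can resolve names:
# the internal resolver should answer, while a query sent straight to an
# external resolver should be blocked by the firewall (i.e. time out).
# Requires the third-party dnspython package; the IPs below are examples.
import dns.exception
import dns.resolver

def query(server_ip: str, name: str) -> str:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    resolver.lifetime = 3  # seconds before we give up
    try:
        answer = resolver.resolve(name, "A")
        return ", ".join(record.address for record in answer)
    except dns.exception.Timeout:
        return "timed out (blocked at the firewall, as intended?)"

print("internal:", query("10.0.0.53", "www.example.com"))
print("external:", query("8.8.8.8", "www.example.com"))
```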
In a Windows environment, you could also push out a proxy setting and have all traffic flow through the proxy, giving you excellent control of all traffic. Have a look at this link on how to accomplish that:
http://www.tomshardware.co.uk/forum/194827-36-restrict-internet-access-group-policy
Some firewall administrators also block all outbound traffic to the outside world that isn't explicitly required. This can be an administrative nightmare, but in theory it tightens your security by blocking, for instance, trojans trying to communicate. Nowadays, though, quite a few trojans use standard ports, so I'm not sure one really gains that much with such a configuration. Of course some trojans would be blocked, so it can't hurt, but there is always the risk of more administrative work when something needs to be opened up for a new piece of software.
Never allow firewall administration from the outside world, and you should even consider only allowing firewall administration from a specific workstation, or even just locally on the firewall itself.
Don't have your firewall answering PING traffic.
If your firewall has different settings for blocking port scans or SYN flooding  and so on, use them. They are there for a reason.
If you really want to think about network traffic and security, you might even consider setting up a network intrusion detection system such as SNORT.
It's not that difficult really, but it does require some skill and separate hardware, and the downside can be various performance issues.
Here's a link to a SNORT cheat sheet by Tim Kiery at Comparitech that I got as a tip by email the other day.
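Just to show the basic idea behind spotting a port scan, here's a minimal Python sketch that reads firewall log lines and flags source addresses that touch many different ports. The SRC=/DPT= fields assume an iptables-style log format, and a real IDS like SNORT obviously does far more than this:

```python
import re
import sys
from collections import defaultdict

# Ports contacted per source address, read from firewall log lines on stdin.
# The "SRC=... DPT=..." fields assume an iptables-style log format.
PATTERN = re.compile(r"SRC=(?P<src>\S+).*?DPT=(?P<dpt>\d+)")
THRESHOLD = 20  # distinct destination ports before we call it a scan

ports_per_source = defaultdict(set)

for line in sys.stdin:
    match = PATTERN.search(line)
    if match:
        ports_per_source[match.group("src")].add(int(match.group("dpt")))

for src, ports in sorted(ports_per_source.items()):
    if len(ports) >= THRESHOLD:
        print(f"possible port scan from {src}: {len(ports)} different ports")
```

You'd just pipe the firewall log through it, for instance `python3 scan_check.py < firewall.log` (the file names here are only examples).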
VPNs are still widely used for letting your users connect to your internal resources, and they work great from time to time. Unfortunately, performance is often an issue, as are DNS handling and sometimes also licensing costs. Nowadays there are other kinds of solutions, such as SharePoint Server, RD Web and cloud solutions (yes, I know, basically I am a Windows guy).
If you still want VPN and want to keep your costs down, you could have a look at OpenVPN, for instance. I've used it at a few customer sites and it works, and it's pretty fast really.
Wireless networks and access to networks should also be considered carefully.
  1. A few pointers: use WPA2 and do not use the EasySetup (WPS) features that only require a simple PIN code. WEP and WPA are too easy to crack, and the WPS PIN is basically only 8 digits that an attacker needs to guess, where the last digit is just a checksum and the PIN is verified in two halves, so there are only around 11,000 combinations to try. That's done in only a few hours.
  2. Hide the broadcast name (SSID). It can still be found, but what you can't see is more difficult to find, right?
  3. You should also separate the WiFi access from the rest of the network and not have mobile phones on the same networks as servers and workstations.
  4. In 99.9% of the cases where you allow WiFi access for your end users' mobile phones, it's to give them the ability to reach Internet services, not to reach anything on your corporate network. If it's for laptop access, have a separate WiFi network for laptops and one for mobile phones. It's not that expensive to set up, and it gives your laptop users all the bandwidth for work, not for Spotify.
  5. If you really want to restrict network access, most switches and wireless routers have the ability to set up MAC address filtering, but the downside is of course that it's an administrative nightmare if you have many devices and users. If you enable those features you'll have to keep track of each MAC address on your network, which can be time-consuming. It does tighten your security, of course.
  6. For external guests you should also have a separate guest network that has no connection to your server network or the workstation network. Remember, even if the salesperson or consultant seems reliable, you have absolutely no way of knowing if their computers have been infected with a virus or if they are up to no good.
  7. Don't have computers in the reception area, such as guest access systems, connected to the corporate network. There is absolutely no need for external visitors to be able to browse your internal network.
  8. If you have any kind of guest access system, make sure it runs in kiosk mode with no way to break out of the application shell that you've decided on for the guests. Plug all USB ports and any other types of input mechanisms.
  9. Don't have all of the network sockets patched in just out of laziness. If someone plugs in a laptop or computer that you haven't been informed about, they shouldn't be able to start surfing or browsing your network unless you say so. Of course, there are always ways around that, for instance just borrowing the Ethernet cable from another PC and so on, but it does make things a bit more difficult anyway.
  10. Always have good monitoring software running and checking your network for new devices. If you start seeing devices with MAC addresses like 00-00-BE-50-DE-AD... well, it's too late, you're toast. Personally I favor Spiceworks, but there are lots of monitoring software solutions out there. Take your pick. Basically, you need to have a clue about what's going on in your network and, even more so, you need to know why. You need to monitor bandwidth usage and also have monitoring points on your network, both internal and external. A small sketch of the idea of spotting new devices follows right after this list.
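As promised above, here's a minimal Python sketch of the "new device" check: it parses the local ARP table and compares it against a file of known MAC addresses. The file name is just an example, the file is assumed to already exist, and the `arp -a` output format varies a bit between operating systems. A real tool like Spiceworks does this properly, of course:

```python
import re
import subprocess
from pathlib import Path

KNOWN_FILE = Path("known_macs.txt")  # example file: one MAC address per line
MAC = re.compile(r"(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}", re.IGNORECASE)

def current_macs() -> set[str]:
    """Collect MAC addresses currently visible in the local ARP table."""
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    return {m.group(0).lower().replace("-", ":") for m in MAC.finditer(output)}

known = {line.strip().lower() for line in KNOWN_FILE.read_text().splitlines() if line.strip()}
new_devices = current_macs() - known

for mac in sorted(new_devices):
    print(f"unknown device on the network: {mac}")
```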
// Juha “Juffe” Jurvanen
Senior consultant in backup, IT security, server operations and cloud

By Juha Jurvanen @ JufCorp