Saturday, 16 August 2014

How To Setup Port Forwarding

If you are considering hosting services (a web server, an FTP server, a game server) on your home network computers and making them accessible from the Internet, then port forwarding is a prerequisite. Port forwarding, also called port mapping, is a networking technique (NAT/PAT) that redirects a connection from a remote computer on the Internet to a port listening on a private network where a service is running. The NAT (Network Address Translation) / PAT (Port Address Translation) mechanism is enabled at the router level. Let’s look at the picture below as an example.



The “red” line represents the Internet traffic. A workstation makes a connection to my WAN IP address, provided by my ISP, on port 3389 (RDP). At the router level, the port value is checked against the existing port forwarding rules.
Once a matching rule is found, the router “forwards” the request to the local IP address (192.168.110.2) associated with that port, illustrated by the “blue” line.
Prerequisites:
Have a Dynamic DNS setup.
Static IP on the servers or workstations hosting the services.
Workstations have their firewall disabled, or an exception rule added for the service.
Steps Overview
1- Identify the host LAN IP address, from the command line or via the network interface properties
2- Identify the port listener associated with the service
3- Set up the port forwarding rule in the router
4- Connect to the service from a remote client

1- Get the LAN IP addresses

From the Command line
“Start” > “Run” > “cmd”
Type ipconfig at the prompt and hit the ENTER key.
From the output, the value that we are looking for is the “IPv4 Address”. Write down the value.
Here, I only care about the workstation IP address, so plain “ipconfig” is enough. The /all switch returns a lot more information, such as the “Default Gateway”, DNS servers, MAC address, DHCP server, and leases.
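The same IPv4 address can also be found programmatically. Below is a minimal Python sketch using only the standard library; the 8.8.8.8 address is an arbitrary routable destination used only to pick the outbound interface, and no packet is actually sent:

```python
import socket

def get_lan_ip():
    """Return the LAN IPv4 address this machine uses for outbound traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket selects a local interface without
        # transmitting anything on the wire.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(get_lan_ip())
```

This reports the same “IPv4 Address” value that ipconfig shows for the active interface.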




From the Network Adapter Settings. Start > Control Panel > Network and Sharing Center > Change Adapter settings (#1)


Right-click the active network interface (#1) and select “Status” (#2).


Click “Details” (#1); (#2) shows the value of the local IP address.


Identify the port.

Every server application listens on a port, which is a unique value.
When a service is set up to accept connections from clients over a network, the architecture is called the “client-server” model.
For a client to connect, and authenticate, to a service hosted on a networked device, two pieces of information are required: the host IP address and the listening port.
For instance, Terminal Services (Remote Desktop Protocol) listens on port 3389, a web server (IIS or Apache) on port 80, and an FTP service on port 21.
The last piece of information that is good to know is the protocol: TCP, UDP, or both. When in doubt, select “Both” or “TCP/UDP”.
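Before creating any rules, it helps to confirm that the service actually has a listener on its port. Here is a quick TCP probe sketched in Python; the loopback address and the port list below are placeholders, so substitute the host and port you plan to forward to:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the common service ports from the examples above on this machine.
for port in (21, 80, 3389):
    state = "open" if port_is_open("127.0.0.1", port, timeout=0.5) else "closed"
    print(port, state)
```

Note that this only tests TCP; UDP services do not acknowledge a connection, so they need a protocol-specific check.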

Setting up the port forwarding rule.

The setup occurs within the router. From the main menu, look for “Applications” and “Port Forwarding” or “Port Range Forward”.






Based on my router, here is a detailed explanation of each value, from top to bottom.
 
“Sequence Number”: This is just an incremental value.
“Rule Name”: Mostly for record-keeping. Come up with something descriptive in case the rule needs to be updated later.
“Rule Enable”: In some cases, there may be a need to temporarily disable the port forwarding rule.
“External Interface”: This field is specific to my router. It supports a mobile WAN as a failover; in the event my Internet service is unavailable, I could connect a USB mobile broadband modem and keep my network online.
“Protocol”: Choices are TCP, UDP, or TCP/UDP. When in doubt, choose TCP/UDP.
“External Port Range”: Ordinarily, the external port range matches the internal port range. However, for security purposes, we may want to change the external port value; I will discuss that point in the best practices section. What is important for now is to understand the port forwarding concept.
“Internal IP”: Self-explanatory; enter here the IP address of the host running the application or service.
“Internal Port Range”: The port value of the service running on the internal IP host. The services we are going to make available, and their associated port numbers, are FTP (21), IIS (80), and RDP (3389).
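To make the rule matching concrete, here is a toy Python model of a rule table like the one above. The rule names and the internal host 192.168.110.2 mirror this tutorial's setup and are otherwise arbitrary; a real router does this lookup in its NAT engine:

```python
# Hypothetical rule table mirroring the FTP, web, and RDP rules above.
RULES = [
    {"name": "FTP", "protocol": "TCP", "external_port": 21,
     "internal_ip": "192.168.110.2", "internal_port": 21},
    {"name": "Web", "protocol": "TCP", "external_port": 80,
     "internal_ip": "192.168.110.2", "internal_port": 80},
    {"name": "RDP", "protocol": "TCP", "external_port": 3389,
     "internal_ip": "192.168.110.2", "internal_port": 3389},
]

def forward(external_port, protocol="TCP"):
    """Return (internal_ip, internal_port) for a matching rule, or None."""
    for rule in RULES:
        if rule["external_port"] == external_port and \
           rule["protocol"] in (protocol, "TCP/UDP"):
            return rule["internal_ip"], rule["internal_port"]
    return None  # no rule: the router drops the unsolicited packet

print(forward(3389))  # ('192.168.110.2', 3389)
print(forward(22))    # None
```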

RDP port forwarding rule
 
web-setup
Web Server forwarding rule.
 
FTP port forwarding rule.

Connection from a remote client to the local service.

Remote Desktop Protocol:
From the remote computer, start the RDP client.
In Windows 7:
“Start” > “Run” > “mstsc”
[Alternative] “Start” > “All Programs” > “Accessories” > “Remote Desktop Connection”
Enter the computer hostname or public IP address (#1), then click “Connect” (#2). In the next dialog box, (#3) is optional; check it if unsure, then click “Connect” (#4).

 rdp-connection

If you get the authentication box, it means that the connection has been established and the port forwarding is functional. Enter your credentials (#1). It is NOT recommended to check “Remember my credentials”, and this is valid for any form of authentication: if you happen to leave your computer unattended, a third party could gain access without needing to enter the username/password combination.
Click “OK” (#3) to authenticate and access the resource.

rdp-credentials

Success! I was able to establish a Remote Desktop session to my workstation from the Internet.

rdp-connection-established

Best practices:

Setting up port forwarding or port mapping opens up your network to the Internet. Access to your computer resources from the Internet requires two pieces of information: the WAN IP address (or DNS name) and the port number.
You do not want to advertise your IP address on forums or social media networks.
You do not want to use the application or service default port as the incoming port on the WAN side. It is easy to guess the service based on the port. Once I know what service is behind a port, I know which client to use to try to gain access to that resource.
In my example, if port 3389 is open, it is likely that the computer accepts RDP connections, so I would use an RDP client to connect. I would still need to authenticate before I could access the workstation.
The idea is to pick a random port for the WAN incoming request. For this tutorial, I set the RDP external port value to “4000”. Although the port is open from the Internet, it would take a lot of guessing to find out which service I am actually running on the home network side.
 
change-port
Normally, to RDP into a computer there is no need to specify the port, but since it was changed from 3389, it must be entered as shown in the screenshot below.
 
rdp-with-port
rdp-connected

Conclusion:

Once you understand the port forwarding (port mapping) concept, your data and computer resources can be accessed from anywhere, as long as an Internet connection is available.
Keep in mind that if your resources are accessible from the outside, you want to monitor your server(s) for unusual behavior, such as slowness, higher bandwidth usage, or an increase in disk space usage; review connection logs (from the router) and security logs (from the Windows Event Viewer); and keep your antivirus up to date.




 

Thursday, 24 October 2013

Natting - A little bit about Hair-Pin NAT ...

In the network topology below, a web server behind a router sits in private IP address space, and the router performs NAT to forward traffic arriving at its public IP address to the web server behind it.

Hairpin nat 1.png

The NAT configuration looks like this:


/ip firewall nat
add chain=dstnat dst-address=1.1.1.1 protocol=tcp dst-port=80 \
  action=dst-nat to-address=192.168.1.2
add chain=srcnat out-interface=WAN action=masquerade

When a client out on the Internet with IP address 2.2.2.2 establishes a connection to the web server, the router performs NAT as configured.
Hairpin nat 2 new.png

  1. the client sends a packet with a source IP address of 2.2.2.2 to a destination IP address of 1.1.1.1 on port tcp/80 to request some web resource.
  2. the router destination NATs the packet to 192.168.1.2 and replaces the destination IP address in the packet accordingly. The source IP address stays the same: 2.2.2.2.
  3. the server replies to the client's request and the reply packet has a source IP address of 192.168.1.2 and a destination IP address of 2.2.2.2.
  4. the router determines that the packet is part of a previous connection and undoes the destination NAT, and puts the original destination IP address into the source IP address field. The destination IP address is 2.2.2.2, and the source IP address is 1.1.1.1.

The client receives the reply packet it expects, and the connection is established.
When a client on the same internal network as the web server requests a connection to the web server's public IP address, the connection breaks.

Hairpin nat 3.png

  1. the client sends a packet with a source IP address of 192.168.1.10 to a destination IP address of 1.1.1.1 on port tcp/80 to request some web resource.
  2. the router destination NATs the packet to 192.168.1.2 and replaces the destination IP address in the packet accordingly. The source IP address stays the same: 192.168.1.10.
  3. the server replies to the client's request. However, the source IP address of the request is on the same subnet as the web server. The web server does not send the reply back to the router, but sends it back directly to 192.168.1.10 with a source IP address in the reply of 192.168.1.2.

The client receives the reply packet, but it discards it because it expects a packet back from 1.1.1.1, and not from 192.168.1.2. As far as the client is concerned the packet is invalid and not related to any connection the client previously attempted to establish.
To fix the issue, an additional NAT rule needs to be introduced on the router to enforce that all reply traffic flows through the router, despite the client and server being on the same subnet. The rule below is very specific to only apply to the traffic that the issue could occur with - if there are many servers the issue occurs with, the rule could be made broader to save having one such exception per forwarded service.

/ip firewall nat
add chain=srcnat src-address=192.168.1.0/24 \
  dst-address=192.168.1.2 protocol=tcp dst-port=80 \
  out-interface=LAN action=masquerade

Hairpin nat 4.png

With that additional rule, the flow now changes:

  1. the client sends a packet with a source IP address of 192.168.1.10 to a destination IP address of 1.1.1.1 on port tcp/80 to request some web resource.
  2. the router destination NATs the packet to 192.168.1.2 and replaces the destination IP address in the packet accordingly. It also source NATs the packet and replaces the source IP address in the packet with the IP address on its LAN interface. The destination IP address is 192.168.1.2, and the source IP address is 192.168.1.1.
  3. the web server replies to the request and sends the reply with a source IP address of 192.168.1.2 back to the router's LAN interface IP address of 192.168.1.1.
  4. the router determines that the packet is part of a previous connection and undoes both the source and destination NAT, and puts the original destination IP address of 1.1.1.1 into the source IP address field, and the original source IP address of 192.168.1.10 into the destination IP address field.

The client receives the reply packet it expects, and the connection is established.
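The two flows above can be condensed into a toy Python model. This is a simplification that ignores ports and connection tracking; the addresses are the ones from the diagrams (1.1.1.1 public, 192.168.1.0/24 LAN):

```python
# Toy model of the NAT rewrites described above.
PUBLIC_IP = "1.1.1.1"
SERVER_IP = "192.168.1.2"
LAN_PREFIX = "192.168.1."

def reply_source(client_ip, hairpin_nat):
    """Source IP the client sees on the reply to its request to 1.1.1.1."""
    on_lan = client_ip.startswith(LAN_PREFIX)
    if on_lan and not hairpin_nat:
        # The server sees the client's real LAN address and replies to it
        # directly, bypassing the router: the destination NAT is never
        # undone, so the reply arrives from the server's private address.
        return SERVER_IP
    # Otherwise the reply flows back through the router (the external client
    # is off-LAN; the LAN client's request was masqueraded to 192.168.1.1),
    # which undoes the NAT and restores 1.1.1.1 as the source.
    return PUBLIC_IP

print(reply_source("2.2.2.2", hairpin_nat=False))       # external client: 1.1.1.1
print(reply_source("192.168.1.10", hairpin_nat=False))  # broken: 192.168.1.2
print(reply_source("192.168.1.10", hairpin_nat=True))   # fixed: 1.1.1.1
```

A client drops any reply whose source does not match the address it contacted, which is exactly the middle case.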
However, the web server only ever sees a source IP address of 192.168.1.1 for all requests from internal clients regardless of the internal client's real IP address. There is no way to avoid this without either using a router that can do application level DNS inspection and can rewrite A records accordingly, or a split DNS server that serves the internal clients the internal server IP address and external clients the external server IP address.
This is called, among other terms, hairpin NAT because the traffic enters the router through the same interface it leaves through, which when drawn looks like a hairpin.


Reference: MikroTik documentation.

Friday, 25 January 2013

5 Ways to Get Your Blog Indexed by Google in 24 Hours

We all know that content is king and that if you keep blogging… if you keep doing what you love… the traffic and the money will follow suit. While that’s partially true, there are also things that you can do to:
  • Index your newly launched blog fast by major Search Engines
  • Increase traffic to your blog
  • Improve your SERPs (Search Engine Result Positions)
Why wait right? Content can be king but waiting around for traffic to come by itself is not a good way to start blogging. So let’s start…

Getting Indexed

Let’s say you launched a blog today and want it on Google’s results tomorrow. Can this be done? Yes.
The easiest way to get indexed by the major search engines is to get mentioned by established blogs. This will usually get your blog indexed within 24 hours. But since the blog is brand new, I don’t think any established blogger will want to mention it. So instead of begging other bloggers to notice your newly launched blog, you just have to figure out other ways of getting indexed by Google fast. Can it be done? Absolutely! (All it takes is a little effort on your side.)

1. Blog Communities

There are a few blog-related community portals that have very good rankings in Google and other major search engine results: MyBlogLog, BlogCatalog, Blogged and NetworkedBlogs, particularly MyBlogLog. This means that if you get your blog on these blog communities, Google will have no other choice but to index your blog. So, go ahead and register for an account on these communities and list your blog on them. Once you are done you will have a page like this, this and this.

What to pay attention to: your blog’s description (have a proper write-up); keywords & tags (add related keywords and tags to your listing, as these will be used by other members to find your blog); branding (put up your logo, avatars, screenshots, etc., and have consistent branding everywhere); and listing your blog in the correct category.

2. Site Valuation & Stats Sites

Some of those “How Much Is Your Site Worth?” sites have good rankings in search engines. All you need to do is go there and check how much your site is worth. This creates a dedicated page for your blog (like this), and consequently it will be indexed by Google. Here is a list of worthy sites: WebsiteOutlook, StatBrain, CubeStat, WebTrafficAgents, BuiltWith, WhoIs, QuarkBase, URLfan and AboutTheDomain.

3. Feed Aggregators

List your blog’s feed in these feed aggregators: Feed-Squirrel, OctoFinder, FeedAdage. Once you have submitted your feed to these sites, they will keep track of your newly published posts and list them on their sites. Whenever someone clicks on a blog post title, he/she will be redirected to your original blog post, sending you free traffic and getting your latest posts indexed by Google.

4. Social Sites

Registering accounts on social sites with the same username as your blog’s URL is very effective in getting your blog indexed by search engines, especially for your targeted keywords.
For example, if your blog’s name is WhiteElephant, it’s good practice to register the same username on Twitter as @WhiteElephant, and to create a Facebook page at www.facebook.com/WhiteElephant. Having a consistent keyword-username on all major social sites will help get your blog indexed faster, and at a later stage it will also help build a “brand” for your blog.
So, create accounts on the major social sites for your newly launched blog, namely: Twitter, Facebook (create a page for your blog), Digg, StumbleUpon, Delicious, etc. By the way, it’s good practice to create a separate social account for each of your projects. This way you can stay focused and post messages that are related to each project. In the long run, this will help build a like-minded community around your project.
Note from Darren: it’s worth noting that many social media sites (like Twitter) use nofollow tags on links, which means the links don’t really help with SEO. Having said this, it’s still worth getting pages for your keywords/brand, as these pages can rank in and of themselves in Google and can help you control numerous search results for the same keyword.

5. Misc Sites

Squidoo is a community website that allows people to create pages (called “lenses”) on various topics. Creating a page related to your blog and including your feed on it will help your blog get indexed by search engines. Squidoo used to have really good rankings in Google results; not so much today, but it still ranks well and shouldn’t be neglected.
ChangeDetection is a website that monitors sites for changes. When you monitor a particular site using ChangeDetection, it will ask you whether you want the notices to be public or private. If you say public, they will be published in its news section. For example: AdesBlog.com got an update today, type of update: text additions, etc. This will get picked up by search engines, and search engines in return will index your blog.
Technorati is a search engine for searching blogs. According to Wikipedia, as of June 2008, Technorati was indexing 112.8 million blogs and over 250 million pieces of tagged social media. It’s a dying breed, but not dead just yet. You should definitely register for an account and get your blog listed on Technorati.
That’s it. Once you are done creating accounts and submitting your newly launched blog to the sites mentioned above, you should see your blog in Google’s search results within 24 hours. Most of the time it will appear within just a few hours.
Lastly, getting indexed is one thing, but sustaining that traffic is another. And this is where the “Content is King” phrase should truly be emphasized, because without good and valuable content, all your effort will be wasted.
I hope you have found this post useful.

Reference: Abdylas Tynyshov (Ades), a full-time blogger based in Kuala Lumpur, Malaysia.

Wednesday, 2 January 2013

How to Get Google to Index Your New Website & Blog Quickly


Whenever you create a new website or blog for your business, the first thing you probably want to happen is have people find it. And, of course, one of the ways you hope they will find it is through search. But typically, you have to wait around for the Googlebot to crawl your website and add it (or your newest content) to the Google index.
 
So the question is: how do you ensure this happens as quickly as possible? Here are the basics of how website content is crawled and indexed, plus some great ways to get the Googlebot to your website or blog to index your content sooner rather than later.

What is Googlebot, Crawling, and Indexing?

Before we get started on some good tips to attract the Googlebot to your site, let’s start with what the Googlebot is, plus the difference between indexing and crawling.
  • The Googlebot is simply the search bot software that Google sends out to collect information about documents on the web to add to Google’s searchable index.
  • Crawling is the process where the Googlebot goes around from website to website, finding new and updated information to report back to Google. The Googlebot finds what to crawl using links.
  • Indexing is the processing of the information gathered by the Googlebot from its crawling activities. Once documents are processed, they are added to Google’s searchable index if they are determined to be quality content. During indexing, the Googlebot processes the words on a page and where those words are located. Information such as title tags and ALT attributes are also analyzed during indexing.
So how does the Googlebot find new content on the web such as new websites, blogs, pages, etc.? It starts with web pages captured during previous crawl processes and adds in sitemap data provided by webmasters. As it browses web pages previously crawled, it will detect links upon those pages to add to the list of pages to be crawled. If you want more details, you can read about them in Webmaster Tools Help.

Hence, new content on the web is discovered through sitemaps and links. Now we’ll take a look at how to get sitemaps on your website and links to it that will help the Googlebot discover new websites, blogs, and content.

How to Get Your New Website or Blog Discovered

So how can you get your new website discovered by the Googlebot? Here are some great ways. The best part is that some of the following will help you get referral traffic to your new website too!
  • Create a Sitemap – A sitemap is an XML document on your website’s server that basically lists each page on your website. It tells search engines when new pages have been added and how often to check back for changes on specific pages. For example, you might want a search engine to come back and check your homepage daily for new products, news items, and other new content. If your website is built on WordPress, you can install the Google XML Sitemaps plugin and have it automatically create and update your sitemap for you as well as submit it to search engines. You can also use tools such as the XML Sitemaps Generator.
  • Submit Sitemap to Google Webmaster Tools – The first place you should take your sitemap for a new website is Google Webmaster Tools. If you don’t already have one, simply create a free Google Account, then sign up for Webmaster Tools. Add your new site to Webmaster Tools, then go to Optimization > Sitemaps and add the link to your website’s sitemap to Webmaster Tools to notify Google about it and the pages you have already published. For extra credit, create an account with Bing and submit your sitemap to them via their Webmaster Tools.
  • Install Google Analytics – You’ll want to do this for tracking purposes regardless, but it certainly might give Google the heads up that a new website is on the horizon.
  • Submit Website URL to Search Engines – Some people suggest that you don’t do this, simply because there are many other ways to get a search engine’s crawler to your website. But it only takes a moment, and it certainly doesn’t hurt. So submit your website URL to Google by signing into your Google Account and going to the Submit URL option in Webmaster Tools. For extra credit, submit your site to Bing. You can use the anonymous tool to submit URLs below the Webmaster Tools sign-in – this will also submit it to Yahoo.
  • Create or Update Social Profiles – As mentioned previously, crawlers get to your site via links. One way to get some quick links is by creating social networking profiles for your new website or adding a link to your new website to pre-existing profiles. This includes Twitter profiles, Facebook pages, Google+ profiles or pages, LinkedIn profiles or company pages, Pinterest profiles, and YouTube channels.
  • Share Your New Website Link – Once you have added your new website link to a new or pre-existing social profile, share it in a status update on those networks. While these links are nofollow, they will still alert search engines that are tracking social signals. For Pinterest, pin an image from the website and for YouTube, create a video introducing your new website and include a link to it in the video’s description.
  • Bookmark It – Use quality social bookmarking sites like Delicious and Stumble Upon.
  • Create Offsite Content – Again, to help in the link building process, get some more links to your new website by creating offsite content such as submitting guest posts to blogs in your niche, articles to quality article directories, and press releases to services that offer SEO optimization and distribution. Please note this is about quality content from quality sites – you don’t want spammy content from spammy sites because that just tells Google that your website is spammy.
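If your site is not on WordPress and you would rather hand-roll the sitemap described above, a minimal generator is easy to write. Here is a sketch using only Python's standard library; the example URLs and change frequencies are placeholders:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(pages):
    """pages: iterable of (url, changefreq) pairs -> sitemap XML string."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, changefreq in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "changefreq").text = changefreq
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap([
    ("http://www.example.com/", "daily"),
    ("http://www.example.com/about", "monthly"),
]))
```

Save the output as sitemap.xml at the root of your web server and submit its URL in Webmaster Tools.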

How to Get Your New Blog Discovered

So what if your new website is a blog? Then in addition to all of the above options, you can also do the following to help get it found by Google.
  • Setup Your RSS with Feedburner – Feedburner is Google’s own RSS management tool. Sign up or in to your Google account and submit your feed with Feedburner by copying your blog’s URL or RSS feed URL into the “Burn a feed” field. In addition to your sitemap, this will also notify Google of your new blog and each time that your blog is updated with a new post.
  • Submit to Blog Directories – TopRank has a huge list of sites you can submit your RSS feed and blog to. This will help you build even more incoming links. If you aren’t ready to do them all, at least start with Technorati as it is one of the top blog directories. Once you have a good amount of content, also try Alltop.

The Results

Once your website or blog is indexed, you’ll start to see more traffic from Google search. Plus, getting your new content discovered will happen faster if you have set up sitemaps or have an RSS feed. The best way to ensure that your new content is discovered quickly is simply by sharing it on social media networks through status updates, especially on Google+.
Also remember that blog content is generally crawled and indexed much faster than regular pages on a static website, so consider having a blog that supports your website. For example, if you have a new product page, write a blog post about it and link to the product page in your blog post. This will help the product page get found much faster by the Googlebot!
What other techniques have you used to get a new website or blog indexed quickly? Please share in the comments!

Reference: Kristi Hines, a freelance writer and professional blogger.

Sunday, 11 November 2012

Installation and configuration of a DNS server in Windows Server 2008



Why we need a DNS Server in LAN (Local Area Network):

The importance of a DNS server can be judged from the fact that without DNS, computers would have a very tough time communicating with each other. However, many Windows administrators still rely on WINS for name resolution on local area networks, and some have little or no experience with DNS. Steven Warren explains how to install, configure, and troubleshoot a Windows Server 2008 DNS server.

I don't want to go into too much detail, but as many of you are probably aware, the Domain Name System (DNS) is now the name resolution system of choice in Windows (on Microsoft and non-Microsoft servers alike). Without it, computers would have a very tough time communicating with each other. However, many Windows administrators still rely on the Windows Internet Name Service (WINS) for name resolution on local area networks, and some have little or no experience with DNS. If you fall into this category, read on. We'll explain how to install, configure, and troubleshoot a Windows Server 2008 DNS server.

Installation Step by Step

You can install a DNS server from the Control Panel or when promoting a member server to a domain controller (DC) (Figure A). During the promotion, if a DNS server is not found, you will have the option of installing it.
 
Figure-A

 
                 
To install a DNS server from the Control Panel, follow these steps:
  • From the Start menu, select Control Panel | Administrative Tools | Server Manager.
  • Expand and click Roles (Figure B).
  • Choose Add Roles and follow the wizard by selecting the DNS role (Figure C).
  • Click Install to install DNS in Windows Server 2008 (Figure D).

Figure B


 
 
Expand and click Roles

Figure C

 

 

DNS role

Figure D


 
Install DNS

DNS console and configuration

After installing DNS, you can find the DNS console from Start | All Programs | Administrative Tools | DNS. Windows 2008 provides a wizard to help configure DNS.
When configuring your DNS server, you must be familiar with the following concepts:
  • Forward lookup zone
  • Reverse lookup zone
  • Zone types
A forward lookup zone is simply a way to resolve host names to IP addresses. A reverse lookup zone allows a DNS server to discover the DNS name of the host. Basically, it is the exact opposite of a forward lookup zone. A reverse lookup zone is not required, but it is easy to configure and will allow for your Windows Server 2008 Server to have full DNS functionality.
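Once the zones are configured, forward and reverse lookups can be exercised from any client with a couple of standard-library calls. A Python sketch (`localhost` is a placeholder, substitute a name your DNS server holds a record for; results depend on the resolver configured on the machine running it):

```python
import socket

hostname = "localhost"  # substitute a name your DNS server holds an A record for

# Forward lookup: host name -> IPv4 address
ip = socket.gethostbyname(hostname)
print(hostname, "->", ip)

# Reverse lookup: IP address -> host name (needs a PTR record in a reverse zone)
try:
    name, _aliases, _addresses = socket.gethostbyaddr(ip)
    print(ip, "->", name)
except OSError:
    print(ip, "has no PTR record")
```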
When selecting a DNS zone type, you have the following options: Active Directory (AD) Integrated, Standard Primary, and Standard Secondary. AD Integrated stores the database information in AD and allows for secure updates to the database file. This option will appear only if AD is configured. If it is configured and you select this option, AD will store and replicate your zone files.
A Standard Primary zone stores the database in a text file. This text file can be shared with other DNS servers that store their information in a text file. Finally, a Standard Secondary zone simply creates a copy of the existing database from another DNS server. This is primarily used for load balancing.
To open the DNS server configuration tool:
  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Highlight your computer name and choose Action | Configure a DNS Server... to launch the Configure DNS Server Wizard.
  3. Click Next and choose what to configure: a forward lookup zone; forward and reverse lookup zones; or root hints only (Figure E).
  4. Click Next and then click Yes to create a forward lookup zone (Figure F).
  5. Select the appropriate radio button to install the desired Zone Type (Figure G).
  6. Click Next and type the name of the zone you are creating.
  7. Click Next and then click Yes to create a reverse lookup zone.
  8. Repeat Step 5.
  9. Choose whether you want an IPv4 or IPv6 Reverse Lookup Zone (Figure H).
  10. Click Next and enter the information to identify the reverse lookup zone (Figure I).
  11. You can choose to create a new file or use an existing DNS file (Figure J).
  12. On the Dynamic Update window, specify how DNS accepts secure, nonsecure, or no dynamic updates.
  13. If you need to apply a DNS forwarder, you apply it on the Forwarders window. (Figure K).
  14. Click Finish (Figure L)

Figure E


 
 

Figure F


 
 

Forward lookup zone

Figure G

 

 

Desired zone

Figure H


 

IPv4 or IPv6

Figure I

Reverse lookup zone

Figure J


Choose new or existing DNS file

Figure K

 


Forwarders window

Figure L

 
 
 
Now the installation is finished. From here, we will start configuring the DNS server so that your clients and other services can resolve names.

Managing DNS records

You have now installed and configured your first DNS server, and you're ready to add records to the zone(s) you created. There are various types of DNS records available. Many of them you will never use. We'll be looking at these commonly used DNS records:
  • Start of Authority (SOA)
  • Name Servers
  • Host (A)
  • Pointer (PTR)
  • Canonical Name (CNAME) or Alias
  • Mail Exchange (MX)

Start of Authority (SOA) record

The Start of Authority (SOA) resource record is always first in any standard zone. The Start of Authority (SOA) tab allows you to make any adjustments necessary. You can change the primary server that holds the SOA record, and you can change the person responsible for managing the SOA. Finally, one of the most convenient features of Windows Server 2008 is that you can change your DNS server configuration without deleting your zones and having to re-create them (Figure M).

Figure M

 

 

 Change configuration

Name Servers

Name Servers specify all name servers for a particular domain. You set up all primary and secondary name servers through this record.
To create a Name Server, follow these steps:

  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Expand the Forward Lookup Zone.
  3. Right-click on the appropriate domain and choose Properties (Figure N).
  4. Select the Name Servers tab and click Add.
  5. Enter the appropriate FQDN Server name and IP address of the DNS server you want to add.

Figure N: Name Server

Host (A) records

A Host (A) record maps a host name to an IP address. These records help you easily identify another server in a forward lookup zone. Host records improve query performance in multiple-zone environments, and you can also create a Pointer (PTR) record at the same time. A PTR record resolves an IP address to a host name.
To create a Host record:

  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Expand the Forward Lookup Zone and click on the folder representing your domain.
  3. From the Action menu, select New Host.
  4. Enter the Name and IP Address of the host you are creating (Figure O).
  5. Select the Create Associated Pointer (PTR) Record check box if you want to create the PTR record at the same time. Otherwise, you can create it later.
  6. Click the Add Host button.

Figure O: To add a Host (A) record

Pointer (PTR) records

A Pointer (PTR) record creates the appropriate entry in the reverse lookup zone for reverse queries. As you saw in Figure O, you have the option of creating a PTR record when creating a Host record. If you did not create your PTR record at that time, you can do it at any point.
To create a PTR record:

  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Choose the reverse lookup zone where you want your PTR record created.
  3. From the Action menu, select New Pointer (Figure P).
  4. Enter the Host IP Number and Host Name.
  5. Click OK.

Figure P: New Pointer
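The naming convention behind a reverse lookup zone can be sketched with a quick shell one-liner (the IP address here is just an example): the octets are written in reverse order and .in-addr.arpa is appended.

```shell
# Build the reverse-lookup (in-addr.arpa) name for an example IPv4 address.
# 192.168.1.10 becomes 10.1.168.192.in-addr.arpa.
ip="192.168.1.10"
ptr_name=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$ptr_name"   # prints 10.1.168.192.in-addr.arpa
```

This is the name the DNS console builds for you behind the scenes when it creates the PTR entry in the reverse lookup zone.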

Canonical Name (CNAME) or Alias records

A Canonical Name (CNAME) or Alias record allows a single host to be known by multiple names. For example, several Alias records can point to a single server in your environment, a common approach when both your Web server and your mail server run on the same machine.
To create a DNS Alias:

  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Expand the Forward Lookup Zone and highlight the folder representing your domain.
  3. From the Action menu, select New Alias.
  4. Enter your Alias Name (Figure Q).
  5. Enter the fully qualified domain name (FQDN).
  6. Click OK.

Figure Q: Alias Name

Mail Exchange (MX) records

Mail Exchange records identify the mail servers within a zone in your DNS database. Each record carries a priority value, so you can control the order in which your mail servers receive mail. Creating MX records will also help you keep track of the location of all of your mail servers.
To create a Mail Exchange (MX) record:

  1. Select DNS from the Administrative Tools folder to open the DNS console.
  2. Expand the Forward Lookup Zone and highlight the folder representing your domain.
  3. From the Action menu, select New Mail Exchanger.
  4. Enter the Host Or Domain (Figure R).
  5. Enter the Mail Server and Mail Server Priority.
  6. Click OK.

Figure R: Host or Domain
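The priority mechanic is easy to get backwards: the LOWEST preference number wins. A tiny sketch with two hypothetical MX records for example.com shows which server a sending host would try first:

```shell
# MX preference: the LOWEST number has the HIGHEST priority.
# Two hypothetical records for example.com, sorted numerically by preference:
preferred=$(printf '%s\n' \
  "20 backupmail.example.com" \
  "10 mail.example.com" | sort -n | head -n 1)
echo "$preferred"   # prints: 10 mail.example.com
```

So mail.example.com (preference 10) receives mail first, and backupmail.example.com (preference 20) is only tried if it is unavailable.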

Other new records

You can create many other types of records. For a complete description, choose Action | Other New Records from the DNS console (Figure S). Select the record of your choice and view the description.

Figure S: Create records from the DNS console

Troubleshooting DNS servers

We will not go into troubleshooting in depth here, but when you do troubleshoot DNS servers, the nslookup utility will become your best friend. This easy-to-use and very versatile command-line utility is included with Windows Server 2008. With nslookup, you can perform query testing of your DNS servers; the results are useful for troubleshooting name resolution problems and debugging other server-related issues. You can access nslookup (Figure T) right from the DNS console.

Figure T
Thanks for reading; I would welcome your input on this blog post.
Your comments will encourage me to explore more and improve the content to help others with their problems. Feel free to write to me at arsalanh2000@gmail.com or on Twitter: arsalanh2000

Don't forget to click Follow or rate my blog ...

 
 

Saturday, 31 March 2012

How to secure a LAMP server on CentOS or RHEL

LAMP is a software stack composed of Linux (an operating system as a base layer), Apache (a web server that "sits on top" of the OS), MySQL (or MariaDB, as a relational database management system), and finally PHP (a server-side scripting language that is used to process and display information stored in the database).
 
In this article we will assume that each component of the LAMP stack is already up and running, and will focus exclusively on securing the LAMP server(s). We must note, however, that server-side security is a vast subject, and therefore cannot be addressed adequately and completely in a single article.
 
In this post, we will cover the essential must-do's to secure each part of the stack.

Securing Linux

Since you may want to manage your CentOS server via ssh, you need to consider the following tips to secure remote access to the server by editing the /etc/ssh/sshd_config file.
 
1) Use key-based authentication, whenever possible, instead of basic authentication (username + password) to log on to your server remotely. We assume that you have already created a key pair with your user name on your client machine and copied it to your server.
 
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
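If you have not created that key pair yet, a minimal sketch follows (the key file name and the user@host in the comment are examples; in practice you should protect the key with a passphrase rather than -N ""):

```shell
# On the CLIENT: create an RSA key pair. -N "" means no passphrase (for brevity
# only); -q suppresses output. The file name id_rsa_demo is an example.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa_demo" -q
# Then copy the public key to the server while password auth is still enabled:
#   ssh-copy-id -i ~/.ssh/id_rsa_demo.pub gacanepa@your.server.address
```

Only after confirming that key-based login works should you set PasswordAuthentication no, or you may lock yourself out.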

2) Change the port sshd listens on; a number higher than 1024 is a good choice:
 
Port XXXX
3) Allow only protocol 2:

Protocol 2

4) Configure the authentication timeout, do not allow root logins, and restrict which users may login, via ssh:
LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa

5) Allow only specific hosts (and/or networks) to log in via ssh:
In the /etc/hosts.deny file:

sshd: ALL

In the /etc/hosts.allow file:
sshd: XXX.YYY.ZZZ. AAA.BBB.CCC.DDD

where XXX.YYY.ZZZ. represents the first 3 octets of an IPv4 network address and AAA.BBB.CCC.DDD is an IPv4 address. With that setting, only hosts from network XXX.YYY.ZZZ.0/24 and host AAA.BBB.CCC.DDD will be allowed to connect via ssh. All other hosts will be disconnected before they even get to the login prompt and will receive a connection error.
 

 
(Do not forget to restart the sshd daemon to apply these changes: service sshd restart).
We must note that this approach is a quick and easy (but somewhat rudimentary) way of blocking incoming connections to your server. For further customization, scalability, and flexibility, you should consider using plain iptables and/or fail2ban.

Securing Apache

1) Make sure that the system user that is running Apache web server does not have access to a shell:
# grep -i apache /etc/passwd
If user apache has a default shell (such as /bin/sh), we must change it to /bin/false or /sbin/nologin:
# usermod -s /sbin/nologin apache
The following suggestions (2 through 5) refer to the /etc/httpd/conf/httpd.conf file:
2) Disable directory listing: this will prevent the browser from displaying the contents of a directory if there is no index.html present in that directory.
Delete the word Indexes in the Options directive:

# The Options directive is both complicated and important.  Please see
# http://httpd.apache.org/docs/2.2/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks

should read:

Options None
In addition, you need to make sure that the settings for directories and virtual hosts do not override this global configuration.
Following the example above, if we examine the settings for the /var/www/icons directory, we see that "Indexes MultiViews FollowSymLinks" should be changed to "None".

Before:
<Directory "/var/www/icons">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

After:
<Directory "/var/www/icons">
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
3) Hide Apache version, as well as module/OS information in error (e.g. Not Found and Forbidden) pages.
ServerTokens Prod # The HTTP response header will return just "Apache", not its version number
ServerSignature Off # The OS information is hidden
4) Disable unneeded modules by commenting out the lines where those modules are declared:
TIP: Disabling autoindex_module is another way to hide directory listings when there is not an index.html file in them.
5) Limit HTTP request size (body and headers) and set connection timeouts. For each directive below, we list its context, an example, and its meaning.

LimitRequestBody (context: server config, virtual host, directory, .htaccess). Limit file uploads to 100 KiB max. for the uploads directory:

<Directory "/var/www/test/uploads">
   LimitRequestBody 102400
</Directory>

This directive specifies the number of bytes, from 0 (meaning unlimited) to 2147483647 (2 GB), that are allowed in a request body.

LimitRequestFieldSize (context: server config, virtual host). Change the allowed HTTP request header size to 4 KiB (the default is 8 KiB), server wide:

LimitRequestFieldSize 4094

This directive specifies the number of bytes that will be allowed in an HTTP request header and gives the server administrator greater control over abnormal client request behavior, which may be useful for avoiding some forms of denial-of-service attacks.

TimeOut (context: server config, virtual host). Change the timeout from 300 seconds (the default) to 120:

TimeOut 120

This is the amount of time, in seconds, the server will wait for certain events before failing a request.
For more directives and instructions on how to set them up, refer to the Apache docs.

Securing MySQL Server

We will begin by running the mysql_secure_installation script which comes with mysql-server package.
1) If we have not set a root password for the MySQL server during installation, now is the time to do so; remember that this is essential in a production environment. The script will then prompt us to:

2) Remove the anonymous user.
3) Only allow root to connect from localhost.
4) Remove the default database named test.
5) Apply the changes.

6) Next, we will edit some variables in the /etc/my.cnf file:
[mysqld]
bind-address=127.0.0.1 # MySQL will only accept connections from localhost
local-infile=0 # Disable direct filesystem access
log=/var/log/mysqld.log # Enable the log file to watch for malicious activity

Don't forget to restart MySQL server with 'service mysqld restart'.
Now, when it comes to day-to-day database administration, you'll find the following suggestions useful:
  • If for some reason we need to manage our database remotely, we can do so by connecting via ssh to our server first to perform the necessary querying and administration tasks locally.
  • We may want to enable direct access to the filesystem later if, for example, we need to perform a bulk import of a file into the database.
  • Keeping logs is not as critical as the two things mentioned earlier, but may come in handy to troubleshoot our database and/or be aware of unfamiliar activities.
  • DO NOT, EVER, store sensitive information (such as passwords, credit card numbers, bank PINs, to name a few examples) in plain text format. Consider using hash functions to obfuscate this information.
  • Make sure that application-specific databases can be accessed only by the corresponding user that was created for that purpose:
To adjust access permission of MySQL users, use these instructions.
First, retrieve the list of users from the user table:
gacanepa@centos:~$ mysql -u root -p
 
Enter password: [Your root password here]
mysql> SELECT User,Host FROM mysql.user;

 
Make sure that each user only has access (and the minimum permissions) to the databases it needs. In the following example, we will check the permissions of user db_usuario:
mysql> SHOW GRANTS FOR 'db_usuario'@'localhost';
 
You can then revoke permissions and access as needed.
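For the opposite direction, granting rather than revoking, a minimal sketch follows; the database name (app_db), the password, and the chosen privileges are all hypothetical and should be adjusted to what the application actually needs:

```sql
-- Hypothetical example: give an application user access ONLY to its own
-- database, with the minimum privileges it needs, from localhost only.
CREATE USER 'db_usuario'@'localhost' IDENTIFIED BY 'a_strong_password_here';
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'db_usuario'@'localhost';
FLUSH PRIVILEGES;
```

You can then confirm the result with the same SHOW GRANTS statement shown above.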

Securing PHP

Since this article is aimed at securing the components of the LAMP stack, we will not go into detail about the programming side of things. We will assume that our web applications are secure, in the sense that the developers have gone out of their way to ensure there are no vulnerabilities that can give rise to common attacks such as XSS or SQL injection.
1) Disable unnecessary modules:
We can display the list of currently compiled-in modules with the following command: php -m

And disable those that are not needed by either removing or renaming the corresponding file in the /etc/php.d directory.
For example, since the mysql extension has been deprecated as of PHP v5.5.0 (and will be removed in the future), we may want to disable it:
# php -m | grep mysql
# mv /etc/php.d/mysql.ini /etc/php.d/mysql.ini.disabled
 

2) Hide PHP version information:
# echo "expose_php=off" >> /etc/php.d/security.ini [or modify the security.ini file if it already exists]
 

3) Set open_basedir to a few specific directories (in php.ini) in order to restrict access to the underlying file system:
 

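A minimal php.ini sketch for this setting (the directories listed are examples; include only the paths your applications actually need):

```ini
; Restrict PHP file operations to the web root and a temp directory
; (example paths; adjust to your environment).
open_basedir = "/var/www/html:/var/tmp"
```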
4) Disable remote code/command execution along with easy exploitable functions such as exec(), system(), passthru(), eval(), and so on (in php.ini):
 
allow_url_fopen = Off
allow_url_include = Off
disable_functions = "exec, system, passthru, eval"

Summing Up

1) Keep packages updated to their most recent version (compare the output of the following commands with the output of 'yum info [package]'):
The following commands return the current versions of Apache, MySQL and PHP:
# httpd -v
# mysql -V (capital V)
# php -v

Then 'yum update [package]' can be used to update the package in order to have the latest security patches.
 
2) Make sure that configuration files can only be written by the root account:
# ls -l /etc/httpd/conf/httpd.conf
# ls -l /etc/my.cnf
# ls -l /etc/php.ini /etc/php.d/security.ini
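If any of those checks show the wrong owner or permissions, chown and chmod fix them. A quick demonstration on a scratch file (a stand-in for the real config files, which require root to modify):

```shell
# Stand-in for e.g. /etc/my.cnf: owner read/write, everyone else read-only.
touch /tmp/my.cnf.demo
chmod 644 /tmp/my.cnf.demo
stat -c '%a' /tmp/my.cnf.demo   # prints 644
```

On the real files you would additionally run chown root:root so that only the root account can write to them.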

3) Finally, if you have the chance, run these services (web server, database server, and application server) in separate physical or virtual machines (and protect communications between them via a firewall), so that in case one of them becomes compromised, the attacker will not have immediate access to the others. If that is the case, you may have to tweak some of the configurations discussed in this article. Note that this is just one of the setups that could be used to increase security in your LAMP server.

Referred from an open source article.