Terminate a running CentOS program using xkill

When an application becomes unresponsive in CentOS, the task sometimes needs to be terminated. This is a neat trick for closing an application with minimal command-line use.

Start by opening your terminal. Then type the following command:
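The command takes no arguments for this use; your cursor will change into a kill cursor:

    xkill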

Now click on the window you want to terminate. The program will automatically get the PID from the application window and terminate the process.

Output after click:
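The exact resource ID will differ on your system, but the output looks roughly like this:

    xkill:  killing creator of resource 0x3a00005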

 

Backup filesystem to Amazon S3

Every server needs to be backed up periodically. The trouble is finding an affordable place to store your filesystem if it contains large amounts of data. Amazon S3 is the solution, with reasonably priced standard storage ($0.0300 per GB) as well as reduced redundancy storage ($0.0240 per GB) at the time of writing this article. Updated pricing can be seen at http://aws.amazon.com/s3/pricing/.

This short tutorial will show how to back up a server's filesystem using s3cmd, a command-line tool for uploading, retrieving, and managing data in Amazon S3. This implementation uses a cronjob to automate the backup process; the filesystem will be synced nightly.

How to install s3cmd?

This example assumes you are using CentOS or RHEL. The s3cmd library is included in the default rpm repositories.
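Installation is a single yum command (on some systems the package lives in the EPEL repository covered later in this post):

    sudo yum install s3cmd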

After installation the library will be ready to configure.

Configuring s3cmd

An Access Key and Secret Key are required from your AWS account. These credentials can be found on the IAM page.

Start by logging in to AWS and navigating to the Identity & Access Management (IAM) service. Here you will first create a new user. I have excluded my username below.

Create a new user

Next create a group. This group will hold the permission that allows your user to access all of your S3 buckets. Notice under permissions the group has been granted the "AmazonS3FullAccess" right, which means any user in this group can modify any S3 bucket. To grant your new user access to the group, click "Add Users to Group" and select your new user from the list.

Create a new group and give it the “AmazonS3FullAccess” permission. Then add your new user to this group.

For s3cmd to connect to AWS it requires a set of user security credentials. Generate an access key for the new user by navigating back to the user details page. Look toward the bottom of the page for the "Security Credentials" tab. Under Access Keys click "Create Access Key". This will generate an Access Key ID and Secret Access Key, both of which are required for configuring s3cmd.

On the user details page generate a new access key

You now have a user set up with permissions to access the S3 API. Back on your server, you need to input your new access key into s3cmd. To begin configuration type:
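s3cmd ships with an interactive configuration wizard:

    s3cmd --configure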

You should now see the interactive configuration prompts and be able to enter your Access Key ID and Secret Key.

At this point s3cmd is fully configured and ready to push data to S3. The final step is to create your own S3 bucket. This bucket will serve as the storage location for our filesystem.

Setting up your first S3 bucket

Navigate to the AWS S3 service and create a new bucket. You can give the bucket any name you want and pick the region for the data to be stored. This bucket name will be used in the s3cmd command.

Create a new S3 bucket

Each file pushed to S3 is given a storage category of standard or reduced redundancy storage. This is configurable when syncing files. For the purpose of this tutorial all files will be stored in reduced redundancy storage.

Standard vs Reduced Redundancy Storage

The primary difference between the two options is durability and how quickly you need access to your data. Standard storage gives you nearly instant access to your data, whereas reduced redundancy storage (RRS) may take up to several hours to retrieve the file(s). For the use case of this tutorial all files are stored in RRS. As noted previously, RRS is considerably cheaper than standard storage.

Configuring a simple cronjob

To enter the cronjob editor, simply type:
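This opens the current user's cron table in the default editor:

    crontab -e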

Once in the editor, create the cronjob below, which will run Monday through Friday at 3:30 a.m.
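A minimal sketch of such an entry; /path/to/backup and mybucket are placeholders for your own backup path and bucket name:

    30 3 * * 1-5 s3cmd sync --delete-removed --reduced-redundancy /path/to/backup s3://mybucket/ > /dev/null 2>&1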

This cronjob calls the s3cmd sync command and loads the default configuration which you entered above. The --delete-removed option tells s3cmd to scan for locally deleted files, then remove them from the remote S3 bucket as well. The --reduced-redundancy option places all files in RRS for cost savings. Any folder location can be synced; just change the path to your desired location. Make sure to change mybucket to the name of your S3 bucket.

The server has now been configured to do nightly backups of the filesystem to AWS S3 using the s3cmd library. Enjoy!

Using tmpwatch to free resources

Tmpwatch is a utility that recursively removes files that haven't been accessed for a given period of time. On CentOS it comes standard. If it is not run periodically, the tmp folder will expand until either the server is restarted or it hits its disk resource limit. If the tmp folder does become too large, all programs that rely on temporary files will fail.

Ex: An Apache web server runs a PHP script which logs information for later reference. These log files become unwritable due to a lack of disk resources.

It will appear to be a permissions read / write error. However, a simple execution of the following tmpwatch command will free up space and delete all files older than 12 hours.
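A sketch, assuming the default /tmp location (the time argument is interpreted in hours when no suffix is given):

    sudo tmpwatch 12 /tmp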

Note: Never delete all files in the tmp folder, as they may be utilized for semaphore locking by various applications (e.g., MySQL).

Installing EPEL repo on CentOS 7.x

The EPEL (Extra Packages for Enterprise Linux) repository offers a variety of packages that can enhance your programming experience. These packages complement and extend the base packages that come with CentOS. Installing EPEL on CentOS 7 is straightforward (the following commands assume you have root privileges):
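On CentOS 7 the epel-release package is available from the Extras repository; on RHEL 7 you may instead need to install the release RPM published by the Fedora project:

    yum install epel-release
    yum repolist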

That's it. All the packages in the EPEL repo for CentOS 7.x and Red Hat Enterprise Linux (RHEL) version 7.x are now at your fingertips.

Apache 403 Forbidden / permissions not set

When setting up a fresh install of Apache on CentOS 6.x you may encounter a "403 Forbidden" error stating proper permissions have not been set to access the index.html file.

This is due to SELinux not recognizing changed files in the document root. The cause is moving ("mv") files around: the original security context is preserved in the kernel's security module. To update SELinux you simply need to tell it to recursively relabel all files in your web directory using restorecon.
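A sketch, assuming the default document root of /var/www/html; -R recurses and -v prints each relabeled file:

    restorecon -Rv /var/www/html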

Now all files should be accessible to Apache.

Tomcat fresh install on Amazon EC2 Redhat Instance

This tutorial will demonstrate how to install a fresh copy of Apache Tomcat 7.0.53 from the official tarball on an Amazon EC2 Red Hat based instance. It includes the installation of MySQL, vsftpd, SSL (forced for the entire Tomcat server), and iptables prerouting.

To begin, log in to your EC2 instance and do a quick yum update. This will ensure that all of your virtual machine's libraries are up to date.
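Run the update as root (or via sudo):

    sudo yum update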

When prompted, type “yes” to install updates. This update process can last several minutes.

The first package to install will be MySQL. Run the following command to install the server.
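On a RHEL 6 based instance the package names below are the usual ones; on RHEL 7 the distribution ships MariaDB, so the package names would differ:

    sudo yum install mysql mysql-server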

Once installed, add mysql to chkconfig. This makes it so MySQL will automatically start on server reboot.
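Assuming the service is named mysqld (the default for the mysql-server package):

    sudo chkconfig mysqld on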

Now you must configure MySQL. Begin by starting the service.
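Again assuming the mysqld service name:

    sudo service mysqld start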

On first start it will output a message with instructions for securing the installation and setting a root password.

Run the following command to set your new password for root login.
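A sketch using mysqladmin; 'yournewpassword' is a placeholder:

    /usr/bin/mysqladmin -u root password 'yournewpassword'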

Now log in to the MySQL terminal by typing the following:
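Log in as root and enter the password when prompted:

    mysql -u root -p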

It will prompt you for the password that you just set above. The next step is to set up user permissions. This is accomplished by first creating a user, then assigning them permissions to access a given database.
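A sketch to run at the mysql> prompt; mydatabase, appuser, and password123 are placeholders for your own database, username, and password:

    CREATE DATABASE mydatabase;
    CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'password123';
    GRANT ALL PRIVILEGES ON mydatabase.* TO 'appuser'@'localhost';
    FLUSH PRIVILEGES;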

MySQL is now ready to use; you now have a user with permissions to access a given database (if you made one).

The next step is to set up Apache Tomcat 7.0.53. Navigate to the opt directory of your server, then download the Tomcat archive and extract it.
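A sketch assuming the 7.0.53 tarball from the Apache archive; adjust the version and mirror URL to whatever you are installing:

    cd /opt
    wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.53/bin/apache-tomcat-7.0.53.tar.gz
    tar -xzf apache-tomcat-7.0.53.tar.gz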

Tomcat comes loaded with all the files you need. You can test running the server by navigating to the bin directory and running the startup script.
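Assuming the install location from the previous step:

    cd /opt/apache-tomcat-7.0.53/bin
    ./startup.sh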

Note: If Tomcat fails to start, check to make sure that the Java JDK is installed.

If no installation of Java is found, use yum to install JDK 1.7.
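One option is the OpenJDK packages from the standard repositories:

    sudo yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel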

It would be much nicer if you could start / stop the server like a service, e.g. "service tomcat start". If you want Tomcat to run as a service, read the Tomcat Service Script tutorial.

Now I want Tomcat to run on port 80, the standard port for HTTP traffic. To direct traffic from port 80 to Tomcat, please follow my "Running Tomcat port 80" guide.

The next step is to enable SSL for security. In my case I want SSL to be forced / required on all requests. Let's say I have private data being transmitted, so this is necessary.

First edit the conf/server.xml file. Note that the tomcat.keystore entry should point to the location where you placed your keystore file on the web server. I have placed mine in the root of the Tomcat installation.
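A minimal sketch of an HTTPS connector for Tomcat 7; the keystoreFile path and keystorePass value are assumptions to replace with your own:

    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               keystoreFile="tomcat.keystore" keystorePass="changeit"
               clientAuth="false" sslProtocol="TLS" />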

To force SSL on all connections edit the conf/web.xml file. At the end of the file, before the closing </web-app> tag, add:
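The standard constraint that redirects every request to HTTPS looks like this:

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Protected Context</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>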

Tomcat will now force SSL on all incoming connections and is ready for your WAR file. To upload a WAR file we need an FTP server. By default this Red Hat instance does not come with one configured. I chose to use vsftpd.
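Install it with yum:

    sudo yum install vsftpd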

The next step is to configure vsftpd's permissions in its configuration file.
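The main configuration file lives at /etc/vsftpd/vsftpd.conf; open it in your editor of choice:

    sudo vi /etc/vsftpd/vsftpd.conf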

Look for the following lines and uncomment / modify.
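Which directives the original setup changed is an assumption on my part, but a typical set disables anonymous access and lets local users write within their chrooted home directories:

    anonymous_enable=NO
    local_enable=YES
    write_enable=YES
    chroot_local_user=YES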

After edits are made, restart the service.
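Restart vsftpd, and optionally enable it at boot:

    sudo service vsftpd restart
    sudo chkconfig vsftpd on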

Finally, you need to add a user to the system to log in as.
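'ftpuser' below is a placeholder username:

    sudo useradd ftpuser
    sudo passwd ftpuser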

Your server should now accept incoming connections via port 21 (FTP).

Once you log in you will only have access to your home directory. Hence, you will not have permission to upload to the Tomcat server directory in the opt folder. To fix this, add a symbolic link in your home directory to the webapps directory of the Tomcat installation.
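A sketch, again assuming the ftpuser account and the Tomcat location used above:

    sudo ln -s /opt/apache-tomcat-7.0.53/webapps /home/ftpuser/webapps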

Fixing offending key in SSH known hosts

In the event the IP address of your server changes and you are using a private key, there is a quick fix. In my case, SSH reports the offending key as "known_hosts:1", i.e. line 1. To fix the error, let's remove line 1 of the known_hosts file. This solution was performed on CentOS 6.x using the sed command. Sed stands for Stream Editor; it parses text files and applies textual transformations to them, line by line.
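A sketch, assuming the offending entry really is on line 1 of the default known_hosts location:

    sed -i '1d' ~/.ssh/known_hosts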

After running this command the offending key should be removed, and you should be prompted to add the new IP of the server to the known hosts.

Running tomcat port 80

The Hypertext Transfer Protocol (HTTP) is the foundation of data communication for the web. By default Tomcat does not use port 80 for communication; it runs on port 8080 instead. Using iptables, all traffic can be pre-routed from port 80 to port 8080, and all traffic from port 443 (SSL) to port 8443 (Tomcat's SSL port). This walkthrough shows how to set up port 80 forwarding in CentOS 6.x.

To do this, open your iptables file (/etc/sysconfig/iptables) for editing.

Paste in the following:
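A minimal sketch of the file's contents; the exact ruleset from the original post isn't reproduced here, but the two nat-table REDIRECT rules are the important part:

    *nat
    :PREROUTING ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
    -A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8443
    COMMIT
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 8443 -j ACCEPT
    COMMIT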

Finally, restart iptables to apply the changes:
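On CentOS 6.x:

    sudo service iptables restart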

Apache Archiva 5 min install

Apache Archiva is a quick and easy solution to set up your own repository management server. In this example I use CentOS 6.x for my OS.

How To Install / Configure:

Start by downloading the standalone version of Archiva. I suggest placing it in the opt directory.
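A sketch using the 2.0.1 standalone tarball; the archive URL is an assumption, so adjust the version and mirror as needed:

    cd /opt
    wget http://archive.apache.org/dist/archiva/2.0.1/binaries/apache-archiva-2.0.1-bin.tar.gz
    tar -xzf apache-archiva-2.0.1-bin.tar.gz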

Now you need to specify the port for Archiva to run on. The default port is 8080, which can cause conflicts if you are using Tomcat, which also defaults to 8080. I have changed the port to 8081.

/opt/apache-archiva-2.0.1/conf/jetty.xml
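Inside jetty.xml, find the connector's port setting and change the default from 8080 to 8081; the relevant line looks roughly like this:

    <Set name="port"><SystemProperty name="jetty.port" default="8081"/></Set>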

At this point Archiva is ready to run. You can start Archiva with the following command.
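The bundled wrapper script also accepts console, stop, restart, and status:

    /opt/apache-archiva-2.0.1/bin/archiva start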

Archiva can now be accessed by going to http://localhost:8081/ in your browser. A simple GUI will allow you to set up administrative privileges.

Running as a service script

The above installation is great but begs for better integration with CentOS. On Linux, the bin/archiva script is suitable for linking from the /etc/init.d/ directory. Creating a custom service script in this directory will allow you to start / stop / restart Archiva easily. This directory is used to control services within the OS.

Start by creating the archiva service file:
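One way to create it, assuming root privileges:

    touch /etc/init.d/archiva
    chmod 755 /etc/init.d/archiva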

I have chmod'd the archiva file so we can execute it as root. Then add the script below to the file:
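A minimal wrapper sketch that delegates to the standalone script installed above; the chkconfig header comment is what allows the chkconfig step later on:

    #!/bin/bash
    # chkconfig: 2345 90 10
    # description: Apache Archiva repository manager

    # Pass start/stop/restart/status straight through to the bundled Archiva script
    /opt/apache-archiva-2.0.1/bin/archiva $*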

Test the service script above by running the following commands. It should gracefully control the service.
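For example:

    service archiva start
    service archiva status
    service archiva stop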

I don't like having to start Archiva every time I restart my server. Add Archiva to chkconfig so it will automatically start on reboot.
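Register it with chkconfig and enable it:

    chkconfig --add archiva
    chkconfig archiva on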

Apache Archiva 2.0.1 is now installed on CentOS.

Dropbox repository error on CentOS 6.x

Installing Dropbox on CentOS 6.x causes an error coming from the repo: yum cannot retrieve the repository metadata because the $releasever variable in the repo file does not resolve to a Fedora release that Dropbox publishes packages for.

The repo can be fixed by modifying the /etc/yum.repos.d/dropbox.repo file. On line 3, locate the variable $releasever and replace it with 19. The end result below uses 19, but packages built for Fedora 16, 17, 18, 19, or 20 will work.

dropbox.repo
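A sketch of the resulting file; the exact contents Dropbox ships may differ slightly, but the key change is the hard-coded 19 in the baseurl:

    [Dropbox]
    name=Dropbox Repository
    baseurl=http://linux.dropbox.com/fedora/19/
    gpgkey=http://linux.dropbox.com/fedora/rpm-public-key.asc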

Test the results using yum.
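For example, clear the cache and try installing the client (nautilus-dropbox is the package Dropbox's repository provides):

    sudo yum clean all
    sudo yum install nautilus-dropbox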