Web based system to push your GIT code

Posted by: Admin  :  Category: Web Hosting


Hello!

Since posting recently about our Web based SVN push system, we have decided to take what we did there one step further and implement a very similar system for GIT, but with more options!

The web based GIT push system is, as mentioned, very similar to the web based SVN push system, with the exception that you can select branches before exporting the code.

I should stress before continuing that this system is not intended to be publicly visible on a website. Strict access controls need to be implemented in front of it to protect the integrity of your code and to guard against malicious users. For example, making this system available only on a development LAN, or putting it behind an IP restricted firewall with IP restricted apache/nginx rules, web authentication and SSL, will allow for a much more secure implementation. My advice is to always assume everything is vulnerable at any time; working backwards from that assumption has always been a good policy for me.

First of all, the entire solution is available on GitHub for you to preview.

I’ll go through each file individually, briefly explaining what each file does.

index.php
This is a straightforward file. A small amount of PHP is embedded alongside the HTML to present the push page as a simple HTML table. An array is built for all the sites you want to push (in this example, a Dev and a Prod site), which makes it very easy to add additional sites. Each entry in the array holds a source, destination, site name and site URL.

The only field that is really used is the “pushname” variable in each site array. That variable gets passed to the shell script that actually takes care of the pushing mechanism.
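
To give a rough idea of the layout, a site entry could look something like the sketch below; the key names are illustrative rather than copied from the repository, but "pushname" is the value that ends up being handed to push.sh.

// Hypothetical sketch of the site array in index.php -- the exact keys in the
// GitHub version may differ. "pushname" is what gets passed to the shell script.
$sites = array(
    array(
        'sitename'    => 'Dev Site',
        'siteurl'     => 'http://dev.example.com',
        'source'      => '/var/repos/site.git',
        'destination' => '/var/www/dev',
        'pushname'    => 'dev',
    ),
    array(
        'sitename'    => 'Prod Site',
        'siteurl'     => 'http://www.example.com',
        'source'      => '/var/repos/site.git',
        'destination' => '/var/www/prod',
        'pushname'    => 'prod',
    ),
);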

The remaining PHP code in this file builds a list of sites based on the array and pulls the current branch by calling a function in functions.inc.php that lists all the branches associated with a repository and saves them to a text file for easy parsing. Another function retrieves the last time the site was pushed, or "exported", which gives an easy reference point when dealing with multiple developers.

It should be noted that access to this page is best handled through apache/nginx authentication on a per-user basis, because index.php records the username of whoever is accessing the site for logging purposes. Every user who needs access should therefore have an htpasswd user/password created for them, both for security and for accountability.
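
Grabbing that username in PHP is a one-liner; something along these lines (assuming apache/nginx basic auth sits in front of the page) is all index.php needs for its logging:

// Username supplied by apache/nginx basic auth ($_SERVER['PHP_AUTH_USER'] may be
// populated instead, depending on how auth is configured); fall back to 'unknown'.
$pushuser = isset($_SERVER['REMOTE_USER']) ? $_SERVER['REMOTE_USER'] : 'unknown';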

functions.inc.php
This file is where many of the functions live (obviously). There is a cross-site scripting filter function that is used to sanitize any submitted input. I realize this is not very secure, but with the security considerations I mentioned at the beginning of this post, it should suffice. A good systems administrator would implement multiple hardware, software and intrusion detection layers, such as Snort and mod_security, to prevent malicious users from injecting content. Nothing beats the security of a web accessible page that is reachable only on an internal LAN, obviously.


Next we have some functions that grab the branches, get the branch the site was last pushed from, and some log file functions for storing the log info, writing the log data and displaying it. All of these functions are intended to help keep the development process organized and easy to maintain.
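
As a rough sketch of those two helpers (function names, paths and the log format here are assumptions, not the exact code from the repository):

// Illustrative versions of the functions.inc.php helpers -- names and paths
// are placeholders, not the exact ones used in the repository.
function get_branches($repopath) {
    // List all branches in the repository and cache the raw output for easy parsing.
    $output = shell_exec('cd ' . escapeshellarg($repopath) . ' && git branch -a');
    file_put_contents('/tmp/branches.txt', $output);
    return array_filter(array_map('trim', explode("\n", str_replace('*', '', $output))));
}

function get_last_export($logfile, $pushname) {
    // Return the most recent log line mentioning this site, or 'never'.
    $last = 'never';
    if (!file_exists($logfile)) {
        return $last;
    }
    foreach (file($logfile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        if (strpos($line, $pushname) !== false) {
            $last = $line;
        }
    }
    return $last;
}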

gitupdate_process.php
This file is where index.php POSTs the data for the site you want to push. It receives the data via $_POST (with the XSS cleaner function mentioned earlier sanitizing it as best as it can) and then passes those values to the push bash shell script to do the actual file synchronization.

It might be possible to do all the file synchronization in PHP, but I felt that separating the actual git pulling and rsync process into a separate shell script made the process less confusing. The shell script rarely needs to change unless a new site is added.
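
Conceptually, the hand-off from PHP to the shell script amounts to something like the sketch below; the variable names and the xss_cleaner() helper are placeholders for whatever the repository actually uses.

// Sketch of the hand-off in gitupdate_process.php -- names are illustrative.
$pushname = xss_cleaner($_POST['pushname']);
$branch   = xss_cleaner($_POST['branch']);

// escapeshellarg() keeps the values safe on the command line; push.sh then
// does the actual git checkout/pull and rsync to the destination.
$cmd = '/path/to/push.sh ' . escapeshellarg($pushname) . ' ' . escapeshellarg($branch);
shell_exec($cmd);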

log.php
This file is simply loaded as an iframe within index.php when someone clicks to view the export log. It parses the log.txt file and displays it. The export log format can of course be customized, but it would usually contain the site name, the username of who pushed, the date and time, and the branch pushed.

log.txt
This is self-explanatory and contains the log information displayed by log.php.

push.sh
This is the push bash shell script that gitupdate_process.php calls. Again, this could be consolidated into 100% PHP, but I felt segmenting it was a good idea. You can see that the command line arguments are parsed from $_POST in gitupdate_process.php and then passed to the shell script as arguments. This is very simple and shouldn't be too hard to understand. The arguments are the site name ($1) and the git branch name that was selected from the dropdown box before hitting the export button ($2).

That's it! This package for GIT has made many developers' lives easier and caused fewer headaches when diagnosing problems or rolling back to a stable branch. Keeping a stable and organized development environment is key here, with the security considerations I mentioned earlier being paramount above everything else.

I hope that this script was helpful and would welcome any suggestions to improve it further 🙂


Web based system to purge multiple Varnish cache servers

Posted by: Admin  :  Category: Web Hosting

Hello!

We have been working with Varnish for quite a while, and there is already quite a lot of documentation out there on the different methods of purging the cache remotely via curl, the varnishadm tool set and other related methods.

We deal with Varnish in the Amazon Cloud as well as on dedicated servers. In many cases Varnish sits in a pool of servers in the web stack, in front of web services such as Nginx and Apache. Purging specific cached URLs can be cumbersome when you're dealing with multiple cache servers.

Depending on the CMS you are using, there are modules/plugins available that offer the ability to purge Varnish caches straight from the CMS, such as the Drupal Purge module.

We have decided to put out a secure, web accessible method for purging Varnish cached objects across multiple Varnish servers. As always, take the word "secure" with a grain of salt. The recommended way to publish a web accessible page on apache or nginx that gives the end-user the ability to request that cached pages be purged is to take these fundamentals into consideration:

– Make the web accessible page available only to specific source IPs or subnets
– Make the web accessible page password protected with strong passwords and non-standard usernames
– Make the web accessible page fully available via SSL encryption

On the Varnish configuration side of things, with security still in mind, you would have to set up the following items in your config:

ACL

Set up an access control list in Varnish that only allows specific source IPs to send PURGE requests. Here is an example of one:

# ACL For purging cache
acl purgers {
        "127.0.0.1";
        "192.168.0.1"/24;
}

vcl_recv / vcl_hit / vcl_miss / vcl_pass

This is self-explanatory (I hope). Obviously you would integrate the following logic into your existing Varnish configuration.

sub vcl_recv {
        if (req.request == "PURGE") {
                if (!client.ip ~ purgers) {
                        error 405 "Method not allowed";
                }
                return (lookup);
        }
}

sub vcl_hit {
        if (req.request == "PURGE") {
                purge;
                error 200 "Purged";
        }
}
sub vcl_miss {
        if (req.request == "PURGE") {
                purge;
                error 404 "Not in cache";
        }
}
sub vcl_pass {
        if (req.request == "PURGE") {
                error 502 "PURGE on a passed object";
        }
}

The code itself is available on our GitHub Project page. Feel free to contribute and add any additional functionality.

It is important to note that what differentiates our solution from the existing ones out there is that our script manipulates the Host header of the curl request in order to submit the same hostname/URL request across the array of Varnish servers. That way the identical request can be received by multiple Varnish servers with no local hosts file editing or anything like that.
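
Stripped down to its core, that trick looks roughly like the following sketch; the server IPs, hostname and path are placeholders, and the real script adds validation and nicer output.

// Send the same PURGE request to every Varnish server, forcing the Host header
// so each cache resolves the object identically. IPs and hostname are examples.
$varnish_servers = array('10.0.0.11', '10.0.0.12', '10.0.0.13');
$host = 'www.example.com';
$path = '/some/page/to/purge';

foreach ($varnish_servers as $server) {
    $ch = curl_init('http://' . $server . $path);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: ' . $host));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    echo $server . ': ' . curl_exec($ch) . "\n";
    curl_close($ch);
}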

There is lots of room for input sanity checks, better input logic and other options to integrate with Varnish more intuitively. Remember this is a starting point, but hopefully it is useful for you!


Add Captcha to Sugar CRM Web to Lead forms

Posted by: Admin  :  Category: Web Hosting

Howdy!

Capturing leads via web based forms is something that is pretty standard in many industries that rely on internet marketing for sales.

One of the many leading CRM (customer relationship management) systems, which also happens to have an open source "community" edition, is Sugar CRM.

Out of the box, Sugar CRM community edition does not offer anti-spam measures such as captcha. By default, a web to lead form that integrates Sugar onto your public facing website becomes a magnet for spam form submissions. Spammers can scrape indexed Google results for specific fingerprints that are indicative of "spammable" web forms, and this can happen quickly after implementing a form, as your site gets re-indexed by Google.

Sometimes it can be very bad, which is what motivated us to implement reCaptcha (Google's captcha library) with the web to lead Sugar CRM forms.

It was much easier than we thought. Here's how to do it with your Sugar CRM web to lead form:

Implement reCaptcha right next to your submit button on the form

Add the following code (or the code in reCaptcha's latest instructions):
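
Something along these lines works, using the classic reCAPTCHA PHP library (recaptchalib.php) with a placeholder public key, and assuming your form page is processed as PHP; otherwise use the plain HTML/JavaScript embed from the reCaptcha instructions.

<?php
// Classic reCAPTCHA widget -- requires recaptchalib.php from Google and your
// own public key. Place this just above the submit button of the lead form.
require_once('recaptchalib.php');
$publickey = 'your-recaptcha-public-key'; // placeholder
echo recaptcha_get_html($publickey);
?>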

It's important to note that you're not fundamentally altering how the Sugar CRM web to lead form works. You're just including the reCaptcha library and displaying the captcha input box, along with the captcha image of course.

The form, at this point, will still submit and be processed by Sugar regardless of what you enter in the captcha box. The next step is to include the recaptcha “check” in the actual Sugar Lead processing function.

Basically the reCaptcha check, out of the box, does a simple check of the captcha input and "dies" if the input is incorrect. If it's correct, you can put whatever PHP code you like in the "else" statement, which in Sugar's case would be the actual form processing.

Process the captcha and submit the lead form

For Sugar CRM 6.5.x, the file you want to edit is modules/Campaigns/WebToLeadCapture.php. This file is supposed to have a check built in that allows you to override it with a leadCapture_override.php file in the root folder, making the changes you make "upgrade safe", meaning that if you upgrade Sugar, the changes won't get overwritten.

Here is the reCaptcha "check" that verifies the captcha input:
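
A minimal version of that check, again using the classic reCAPTCHA library with a placeholder private key, looks like this:

// Classic reCAPTCHA verification -- the private key is a placeholder. Everything
// Sugar normally does to process the lead goes inside the "else" block.
require_once('recaptchalib.php');
$privatekey = 'your-recaptcha-private-key'; // placeholder
$resp = recaptcha_check_answer($privatekey,
                               $_SERVER['REMOTE_ADDR'],
                               $_POST['recaptcha_challenge_field'],
                               $_POST['recaptcha_response_field']);

if (!$resp->is_valid) {
    die('The reCAPTCHA was not entered correctly. Please go back and try again.');
} else {
    // ... the existing Sugar lead-processing code goes here ...
}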


Notice the "else" statement at the bottom; that's where you want the Sugar code that processes the lead form to execute. You don't want Sugar to do ANYTHING if the captcha was not verified.

Edit the WebToLeadCapture.php file and add the above code around line 58, or above the following code that starts checking the HTML form's POST values:

if (isset($_POST['campaign_id']) && !empty($_POST['campaign_id'])) {

Simply put the else statement right above that code, and make sure the opening and closing brackets of the reCaptcha else statement encompass all the subsequent code, right down to the bottom of the file, with the closing bracket below the following line:

echo $mod_strings['LBL_SERVER_IS_CURRENTLY_UNAVAILABLE'];

Hopefully this will help reduce your spam entries with your Sugar CRM lead forms!


New Website Security Threats for 2020

Posted by: Admin  :  Category: VPS / Dedicated Servers

For anyone running a website, 2020 promises to be a tough year when it comes to cybersecurity. According to a range of security experts, not only will you have to deal with the many existing risks; there will also be a raft of emerging threats, many of them highly advanced. Here, we’ll look at some …

Web Hosting UK Blog

PHP 5.5.36 is available

Posted by: Admin  :  Category: Php

PHP.net news & announcements

Auto updating Atomicorp Mod Security Rules

Posted by: Admin  :  Category: Web Hosting

Hello!

If any of you use mod_security as a web application firewall, you might have enlisted the services of Atomicorp for regularly updating your mod_security ruleset with signatures to protect against constantly changing threats to web applications in general.

One of the initial challenges, in a managed hosting environment, was to implement a system that utilizes the Atomicorp mod_security rules and updates them regularly on an automated schedule.

When you subscribe to their service, they provide access credentials in order to pull the rules. You then need to integrate the rule files into your mod_security implementation and gracefully restart apache or nginx to ensure all the updated rules are loaded.

We developed a very simple python script, intended to run as a cron scheduled task, to accomplish this. We thought we would share it here in case anyone else finds it useful for accomplishing the same thing. The script could easily be modified to download rules from any similar service; it was written for nginx, but can be adapted to work with apache.

Find the code below. Enjoy!

#!/usr/bin/python
import urllib2,re,requests,tarfile,os,time

username = 'yourusername'
password = 'yourpassword'
# create a password manager
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
top_level_url = "http://updates.atomicorp.com/channels/rules/subscription/"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
#data = urllib2.urlopen('http://updates.atomicorp.com/channels/rules/subscription/VERSION')

# parse the MODSEC_VERSION value out of the VERSION file
for line in urllib2.urlopen('http://updates.atomicorp.com/channels/rules/subscription/VERSION'):
    if 'MODSEC_VERSION' in line:
        var = line.split('=',1)
        version = var[1].replace('\n', '')

# they throttle connection requests
time.sleep(10)

atomicdl = 'http://updates.atomicorp.com/channels/rules/subscription/modsec-' + version + '.tar.gz'
atomicfile = urllib2.urlopen(atomicdl)
output = open('/etc/nginx/modsecurity.d/modsecrules.tar.gz', 'wb')
output.write(atomicfile.read())
output.close()

tar = tarfile.open('/etc/nginx/modsecurity.d/modsecrules.tar.gz', 'r:gz')
tar.extractall('/etc/nginx/modsecurity.d/')
tar.close()

os.system("rsync -ravzp /etc/nginx/modsecurity.d/modsec/ /etc/nginx/modsecurity.d")
os.system("rm -rf /etc/nginx/modsecurity.d/modsec /etc/nginx/modsecurity.d/modsecrules.tar.gz")
os.system("sed -i '//d' /etc/nginx/modsecurity.d/*.conf")
