cPanel, the Hosting Platform of Choice, Announces Partnership with CloudLinux Extending Support for Linux Systems Based on RHEL 6 and CentOS 6

Posted by: Admin  :  Category: Cpanel


Houston, Texas, July 31, 2019 – cPanel® is excited to announce a partnership with CloudLinux to extend support for systems running Red Hat® Enterprise Linux® 6 and CentOS 6 by nearly 4 years. Released in 2011, Red Hat Enterprise Linux (RHEL) 6 and CentOS 6 have been receiving only security …


Hidden Reasons Why the Majority of Webmasters Prefer a USA-Based Hosting Company Over an India-Based Hosting Company

Posted by: Admin  :  Category: Web Hosting Comparison

A website hosting comparison question:

I want to host a website, and I found that some Indian companies are offering cheaper rates than USA-based companies for the same type of web hosting plans. Can someone tell me the pros and cons of hosting on a USA-based server? Your guidance in this regard will be very helpful to me in my decision making.


Expert Answer: You are not alone. Web hosting is one of the most important parts of your online presence and can mean the difference between success and failure for your business.

Common Mistakes

Some site owners are confused about whether it matters where they host their site, for example on a server in India versus with a US-based web hosting provider. The truth is that you can use any web hosting provider you like, located anywhere in the world, as long as it can serve your site globally and has reliable servers.

Clarification

Choosing a web hosting provider is a serious factor in achieving success, and one you need to consider carefully. Remember that your web host can make or break your site’s online presence. You need to find a reliable web host that can meet your web hosting requirements. Aside from satisfying your requirements, you also need to find a web host with a good reputation for its hosting service, customer support, money-back guarantee policy, customer feedback and server reliability before you sign up, to prevent any future regrets.

Some consider only the price and get the cheapest web hosting provider. But here is the catch: most of the cheapest web hosting plans require a single prepayment of a year or more, which might not be ideal for website owners who don’t plan to stay with a particular company that long, or who want to change providers once they find out their web host is unreliable. Another issue is whether to get a US-based hosting company or use a local provider.

Here are some advantages that lead most webmasters to prefer a US-based company:

  • they use up-to-date technology
  • reliable servers are located in various locations
  • more knowledgeable and experienced personnel
  • longer money-back guarantee periods
  • more web hosting features

To answer the above query, here are 7 qualities to look for in a good web host. An excellent host:

  1. has cPanel so you can manage the hosting account easily,
  2. gives more-than-enough disk space,
  3. offers unlimited bandwidth,
  4. gives multiple add-on domains,
  5. has an easy-to-use site builder,
  6. has Fantastico and QuickInstall to quickly install apps like WordPress, Joomla, Drupal, osCommerce, Zen Cart and more,
  7. can be easily upgraded to a VPS or one of the cheapest dedicated server plans if and when your business requires it.

Best Hosting 2015

Answering the above question, many experienced web developers suggest trying Hostgator. If you need an “unlimited” hosting plan, experts usually highly recommend going with Hostgator because they are one of the best domain hosting providers around.

What if you find out that Hostgator sucks after you sign up with them? HG has an amazingly long, 45-day money-back guarantee, so you have ample time to test them out. They are rated A+ by the Better Business Bureau, which shows their commitment to customer satisfaction. You also do not have to pay for the first month: you can try their fully functional hosting service for only $0.01 (you need to use the special coupon). Do you know any other hosting company that can give you that kind of assurance? Anybody can grab an account from Hostgator for almost FREE.

How to Get an Unlimited Hosting Plan for Only 1 Cent

Click the coupon below to try Hostgator cPanel hosting for almost free. If you already know that Hostgator is what you want, you can even save 25% off the normal price with the Hostgator coupon 2015 below.

Why You May Want to Avoid Hostgator

To be honest, no web hosting service is perfect. Drawbacks of Hostgator include:

  1. No free domain name – but you can easily get a domain name from a top domain registrar such as Godaddy or Namecheap for $10 or less. That is fairly cheap considering a domain name costs no more than a few cups of coffee.
  2. You need to pay full price after your first invoice – well, they need to make money too, and all the support and great service do come with a cost.

If you are still not sure whether HG is right for you, or you have a specific query about anything at all, try the Live Chat at HG. They are fast and knowledgeable. Just shoot them some questions before you decide.

Hostgator discount code

More than 7 million site owners depend on Hostgator for their hosting needs.

P.S.: HostGator is having a 20% off sale right now, but we have got a better deal for you. Just enter WEBTEMPLATE in the coupon code field when you buy any HostGator hosting plan and you’ll get it for 25% off!


A Web based system to push your SVN code through development, staging and production environments

Posted by: Admin  :  Category: Web Hosting

Note the files in this post are now on GitHub

Hello there!

In development, having a seamlessly integrated process where you can propagate your code through whatever QA, testing and development policy you have is invaluable and a definite time saver.

We work with SVN as well as GIT code repository systems, and we have developed a web-based system to “export” or “push” code through development, staging and production environments.

I have already talked about sanitizing your code during the commit process, to ensure commit messages are standard and there are no PHP fatal errors, so now I will showcase a simple web-based system for propagating your code through development, staging and production servers.

This system should live on a secure, web-accessible page on each server. For the sake of argument, I’ll call each server the following:

dev.server.com — development server

staging.server.com — staging server

www.server.com — production server

We will be using PHP for the web-based interface, and we will assume that you will be password protecting access to this page via htpasswd, as well as forcing SSL. I am also assuming that within your SVN repository you have multiple “sites” that you will be individually pushing or exporting (svn export). Once you have the secure, password-protected page (let’s call it https://dev.server.com/svn), the following PHP page will be the main index:

svnupdate.php

<?php

$sites[] = array(
"name" => "Site A",
"url" => "http://site-a.server.com",
"path" => "/usr/local/www/site-a.server.com",
"source" => "svn://svn.server.com/repository/branches/site-a",
"login" => "svnlogin",
"base" => "1.00",
"notes" => "Standard build for Site A"
);

$sites[] = array(
"name" => "Site B",
"url" => "http://site-b.server.com",
"path" => "/usr/local/www/site-b.server.com",
"source" => "svn://svn.server.com/repository/branches/site-b",
"login" => "svnlogin",
"base" => "1.00",
"notes" => "Standard build for Site B"
);

?>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>
<head>
        <title>SVN Update Page</title>

<style>
body {
        background-color:#eeeeee;
}

.tdheader {
        background-color:#0f2c66;
        color:#FFFFFF;
        font-weight: bold;
}

.tdheader2 {
        background-color:#000000;
        color:#FFFFFF;
        font-weight: bold;
}

.tdrow {
        background-color:#ffffff;
        color:#000000;
        font-weight: normal;
}

a:link,a:active,a:visited {
        color:#0f2c66;
}

a:hover {
        color:#6A9CD3;
}
.menuon {
        background-color:#6699cc;
        color:white;
        font-weight: bold;
}

.menuoff {
        background-color:white;
        color:black;
        font-weight: bold;
}

table {
        border-style: solid;
        border-width: 1px;
        border-color: #000000;
}

</style>
<script type="text/javascript">
function confirmexport(text) {
        if (confirm(text)) {
                document.getElementById('framecont').style.display = '';
                document.getElementById("processframe").contentWindow.document.body.innerHTML = "<div align='center'>Exporting...</div>";
                return true;
        } else return false;
}
function viewframe() {
        document.getElementById('framecont').style.display = '';
}
function closeframe() {
        document.getElementById('framecont').style.display = 'none';
}

</script>
</head>
<body>
<table width="750px" cellpadding="2" cellspacing="1" bgcolor="#000000" border="0">
<tr>
<td class="tdheader2">Server: </td>
<td class="menuon" align="center">Development Server</td>
<td class="menuoff" align="center"><a href="https://staging.server.com/svn/svnupdate.php">Staging Server</a></td>
<td class="menuoff" align="center"><a href="https://www.server.com/svn/svnupdate.php">Production Server</a></td>
</tr>
</table>

<hr size="1" noshade="noshade" />
<a href="log.php" target="processframe" onclick="closeframe();viewframe()">View Development Export Log</a> 
<br><br>
<table cellpadding="2" width="1000px" cellspacing="1" bgcolor="#000000" border="0">
<tr class="tdheader">
<td>Site</td>
<td>Source</td>
<td>UN/PW</td>
<td>Base</td>
<td>Revision</td>
<td>Export</td>
<td>Pending Updates</td>
<td>Notes</td>
</tr>

<?php
if($sites) {
foreach($sites as $key => $value) {
?>
<form method="post" action="svnupdate_process.php" target="processframe">
<tr class="tdrow">
<td><a href="<?=$value['url']?>" target="_blank"><?=$value['name']?></a></td>
<td><?=preg_replace("#svn://svn\.server\.com/#","",$value['source'])?><input type="hidden" name="source" value="<?=$value['source']?>"></td>
<td><?=$value['login']?></td>
<td><?=$value['base']?></td>
<td><input type="text" name="revision" size="5"></td>
<td><input type="hidden" name="site" value="<?=$value['path']?>">
<input type="submit" name="submitbutton" value="Export" onClick="javascript:return confirmexport('This will overwrite the current files on development. Are you sure?');">
</td>
<td width="150px"><center><a href="viewcommit.php?name=<?=$value['name']?>&path=<?=$value['path']?>&svn=<?=$value['source']?>" target="processframe" onclick="closeframe();viewframe()">View</a></center></td>
<td><?=$value['notes']?></td>
</tr>
</form>
<?php } ?>
<?php } ?>
</table>
<br><div id='framecont' style="text-align: left; display: none">
<iframe name="processframe" id="processframe" width="1000px" height="300px" align="left" scrolling="yes" frameborder="0">
                </iframe>
</div>
</body>
</html>

If you look carefully at the above code, you will see that this page depends on 3 external scripts, which I will describe below. The page itself generates a list of whatever sites you want to include in the push process, within a PHP array. The array details important info for each site, such as the name, SVN location and location of the files on the server, as well as whatever other notes and additional info you want to provide.

Each time a site is “exported” by clicking the Export button, the page calls an external script called svnupdate_process.php. This executes the SVN EXPORT command, as well as logging which user requested the action in a simple text-based log file. The user is determined by the authentication user that is accessing the page. The htpasswd credentials you provide to your users should be set per user so that it is easier to determine who pushed the code.

The other two external scripts are one that views the log file in an iframe on the same page, and one that extrapolates the pending commits queued since the LAST code push / svn export. That is really useful, as you can imagine.

Script to view the export log

This script, log.php, is used to dump the contents of the log.txt export log file. Very simple.

log.php

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>
<head>
        <title>Untitled</title>
</head>

<body>
<b>Development Export Log:</b><hr>
<?php
include("./functions.inc.php");
$logfile = new logfile();
$logfile->display();
?>
</body>
</html>

Simple, right? The log.php code includes a functions.inc.php file, used for writing and reading the log.txt file. The above code depends on it, as does the svnupdate_process.php code (described below), in order to log each time someone hits the Export button.

functions.inc.php

<?php
class logfile {
        function logfile() {
                $this->filename = "log.txt";
                $this->Username = $_SERVER['PHP_AUTH_USER'];
                $this->logfile = $this->filename;
        }

        function write($data) { // write to logfile
                $handle = fopen($this->logfile, "a+");
                $date = date("Y-m-d H:i:s");
                $IP = getenv('REMOTE_ADDR');
                $data = "[$date] {$this->Username}:{$IP} - " . $data . "\n";
                $return = fwrite($handle, $data);
                fclose($handle);
        }

        function display() { // display logfile
                $handle = fopen($this->logfile, "a+");
                while(!feof($handle)) { // Pull lines into array
                        $lines[] = fgets($handle, 1024);
                }
                $count = count($lines);
                $count = $count - 2;
                for($i=$count;$i>=0;$i--) {
                        echo $lines[$i] . "<br>";
                }
                fclose($handle);
        }
}
?>

The svn export process is handled by the following script. Again, it’s self-explanatory: PHP executes a shell command to export the SVN code based on the array variables defined in the very first script. Make sure all the variables match what’s in SVN and the location of the files, and even execute a test run of the command manually with those variables. If there are problems, you can modify the command to pipe the output to a log file for further analysis. Additionally, you may need to alter the permissions of the apache user so that the command can be run properly. Avoid giving the apache user a login shell (big no-no); a nologin shell or something along those lines may work. It’s completely up to you, but be very careful about the choices you make to get it to run properly.

svnupdate_process.php

<b>Update/Status Window</b>
<hr>


<?php
include("./functions.inc.php");
$logfile = new logfile();

if($_POST['submitbutton']) {

        $revision = "";
        if($_POST['revision'] != "") {
                $revision = "-r ".$_POST['revision'];
        }

        $command = "/usr/bin/svn export --force --username svnuser --password 'svnpassword' $revision --config-dir /tmp ".$_POST['source']. " " . $_POST['site']." 2>&1";

        if($_POST['submitbutton'] == "Export") {
                $output = shell_exec("umask 022;".$command);
        }

        echo "<pre>$output</pre>";

        $logtext = "Exported to {$_POST['site']}";
        $logfile->write($logtext);
        eaccelerator_clear();
}

?>

Finally the last script will be the script that parses the SVN log output with a date/time range from the last time the export button was pushed, until the current date and time. This will load the output in the same iframe log window on the svn page so the user can see what pending commits are in the code since the last time it was exported. Invaluable information, right?

Note that this has a function to filter out illegal characters to avoid cross-site scripting injections. This code should be completely restricted from outside public use; however, it might be worth putting this function in the svnupdate_process.php script as well. Can’t be too careful. I thought I’d include it here for you to use.

viewcommit.php

<?php

        if(($_GET['svn'] != "") && ($_GET['path'] != "") && ($_GET['name'] != "")) {

        // Cross Site Script & Code Injection Sanitization
        function xss_cleaner($input_str) {
        $return_str = str_replace( array('<',';','|','&','>',"'",'"',')','('), array('&lt;','&#58;','&#124;','&#38;','&gt;','&apos;','&#x22;','&#x29;','&#x28;'), $input_str );
        $return_str = str_ireplace( '%3Cscript', '', $return_str );
        return $return_str;
        }

        $xss_path=xss_cleaner($_GET['path']);
        $xss_svn=xss_cleaner($_GET['svn']);
        $xss_name=xss_cleaner($_GET['name']);

        echo "<b>Viewing Pending Updates For : ". $xss_name . "</b>";
        echo "<hr>";

        $command = "/usr/bin/svn --username svnuser --password 'svnpassword' --config-dir /tmp log " . $xss_svn . " -r {\"`grep \"" . $xss_path . "\" log.txt | tail -n 1 | awk -F \" \" '{printf \"%s %s\", \$1,\$2}' | sed -e 's/\[//g' -e 's/\]//g'`\"}:{\"`date \"+%Y-%m-%d %H:%M:%S\"`\"}";

        $output = shell_exec("umask 022;".$command);
        echo "<pre>$output</pre>";
}
else {
        echo "No queries passed!";
}

?>

Let’s break down the SVN log command, so you know what’s going on. I’m grabbing the SVN site array variables when the “view log” link is clicked on the SVN page. I am also parsing the export log text file to get the last entry for the particular site in question, grabbing the date and time.

I am then getting the current date and time to complete the date/time range in the svn log query. The finished query should look something like this:

svn --username svnuser --password 'svnpassword' --config-dir /tmp log svn://svn.server.com -r {"2013-01-01 12:01:00"}:{"2013-02-01 12:01:00"}

Note the files in this post are now on GitHub


Web based system to push your GIT code

Posted by: Admin  :  Category: Web Hosting

Hello!

Since posting recently about our web-based SVN push system, we have decided to take what we did there one step further and implement a very similar system for GIT, but with more options!

The web-based GIT push system is, as mentioned, very similar to the web-based SVN push system, with the exception that you can select branches before exporting the code.

I should stress before continuing that this system is not intended to be publicly visible on a website. Strict access controls need to be implemented in front of it to protect its integrity and guard against malicious users. For example, only making this system available on a development LAN, or putting it behind an IP-restricted firewall with IP-restricted apache/nginx rules, web authentication and SSL, will allow for a much more secure implementation. My advice is to always assume everything is vulnerable at any time. Working backwards from that assumption has always been a good policy for me.

First of all, the entire solution is available on GitHub for you to preview.

I’ll go through each file individually, briefly explaining what each file does.

index.php
This is a straightforward file. A small amount of PHP code is embedded in this file with HTML to present the push page in a simple HTML table. An array is built for all the sites you want to push (in this example, a Dev and a Prod site). The array makes it very easy to add additional sites. Each array entry contains a source, destination, site name and site URL.

The only field that is really used is the “pushname” variable in each site array. That variable gets passed to the shell script that actually takes care of the pushing mechanism.

The remaining PHP code in this file builds a list of sites based on the array, as well as pulling the current branch by running a function included in functions.inc.php that pulls all the branches associated with a repository and saves them to a text file for easy parsing. The other function pulls the last time the site was pushed or “exported”, giving an easy reference when dealing with multiple developers.

It should be noted that it is best to implement apache/nginx web authentication on a per-user basis for access to this page. This is because the index.php file parses the username of whoever is accessing the site for logging purposes. So every user that needs access should have an htpasswd user/password created for them, for security and accountability purposes.

functions.inc.php
This file is where many of the functions live (obviously). There is a cross-site scripting filter function that is used to sanitize any submitted input. I realize this is not very secure, but with the security considerations I mentioned at the beginning of this post, it should suffice. A good systems administrator would implement many hardware, software and intrusion-detection layers, such as snort and mod_security, to prevent malicious users from injecting content. Nothing beats the security of a completely offline web-accessible page on an internal LAN, obviously.

Next we have some functions that grab the branches, get the current branch that the site has been previously pushed on, some log file functions for storing the log file info and writing the log data and displaying it as well. All of these functions are intended to help keep the development process very organized and easy to maintain.
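To give a rough idea of the branch-grabbing part, here is a minimal sketch of what such a helper might look like. This is illustrative only and not the code from the GitHub repo; the function name, repository path handling and parsing details are assumptions:

<?php
// Illustrative sketch of a branch-listing helper (not the repo's actual code).
// Assumes the repository lives at a known local path.
function get_branches($repo_path) {
        $branches = array();
        // "git branch" prints one branch per line; the active one is prefixed with "*"
        $output = shell_exec("cd " . escapeshellarg($repo_path) . " && git branch 2>&1");
        if ($output === null) {
                return $branches; // command failed; return an empty list
        }
        foreach (explode("\n", trim($output)) as $line) {
                $branch = trim(str_replace("*", "", $line));
                if ($branch != "") {
                        $branches[] = $branch;
                }
        }
        return $branches;
}
?>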

gitupdate_process.php
This file is where the index.php file POSTs the data of the site you want to push. This file receives the data as $_POST (with the XSS cleaner function mentioned earlier sanitizing it as best it can) and then passes those variables to the push bash shell script in order to do the actual file synchronization.

It might be possible to do all the file synchronization in PHP, but I felt that separating the actual git pulling and rsync process into a separate shell script made the process less obfuscated and confusing. The shell script rarely needs to change unless a new site is added, obviously.
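As a rough sketch of that hand-off (the actual code is on GitHub; the POST field names and the xss_cleaner() call here are illustrative), gitupdate_process.php boils down to something like this:

<?php
// Illustrative sketch of the POST-to-shell hand-off (names are assumptions).
include("./functions.inc.php"); // assumed to provide xss_cleaner() and the logfile class

$pushname = xss_cleaner($_POST['pushname']); // site identifier from the array in index.php
$branch   = xss_cleaner($_POST['branch']);   // branch picked from the dropdown

// escapeshellarg() guards both arguments before they reach the shell script
$command = "./push.sh " . escapeshellarg($pushname) . " " . escapeshellarg($branch) . " 2>&1";
$output = shell_exec($command);
echo "<pre>$output</pre>";
?>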

log.php
This file is simply loaded as an iframe within index.php when someone clicks to view the export log. It parses the log.txt file and displays it. The export log format can be customized obviously, but usually would contain the site name, username who pushed, date and time as well as the branch pushed.

log.txt
This is self-explanatory and contains the log information detailed in log.php.

push.sh
This is the push bash shell script that gitupdate_process.php calls. Again, this could be consolidated to be 100% PHP, but I felt segmenting it was a good idea. You can see that the command line arguments are parsed from $_POST in gitupdate_process.php and then passed to the shell script as arguments, as sketched above. This is very simple and shouldn’t be too hard to understand. The arguments are basically the site name ($1) and the git branch name that was selected from the dropdown box before hitting the export button ($2).

That’s it! This package for GIT has made many developers’ lives easier and caused fewer headaches when diagnosing problems or even rolling back to a stable branch. Keeping a stable and organized development environment is key here, with the security considerations I mentioned earlier being paramount above everything else.

I hope that this script was helpful and would welcome any suggestions to improve it further 🙂


Web based system to purge multiple Varnish cache servers

Posted by: Admin  :  Category: Web Hosting

Hello!

We have been working with Varnish for quite a while, and there is already quite a lot of documentation out there on the different methods for purging cache remotely via curl, the Varnish admin tools and other related methods.

We deal with Varnish in the Amazon cloud as well as on dedicated servers. In many cases Varnish sits in a pool of servers in the web stack, in front of web services such as Nginx and Apache. Purging specific cached URLs can be cumbersome when you’re dealing with multiple cache servers.

Depending on the CMS you are using, there are modules/plugins available that offer the ability to purge Varnish caches straight from the CMS, such as the Drupal Purge module.

We have decided to put out a secure, web-accessible method for purging Varnish cached objects across multiple Varnish servers. As always, take the word “secure” with a grain of salt. The recommended way to publish a web-accessible method on apache or nginx that gives the end user the ability to request that cached pages be purged would be to take these fundamentals into consideration:

– Make the web accessible page available only to specific source IPs or subnets
– Make the web accessible page password protected with strong passwords and non-standard usernames
– Make the web accessible page fully available via SSL encryption

On the Varnish configuration side of things, with security still in mind, you would have to set up the following items in your config:

ACL

Set up an access control list in Varnish that only allows specific source IPs to send the PURGE request. Here is an example:

# ACL For purging cache
acl purgers {
        "127.0.0.1";
        "192.168.0.1"/24;
}

vcl_recv / vcl_hit / vcl_miss / vcl_pass

This is self-explanatory (I hope). Obviously you would be integrating the following logic into your existing Varnish configuration.

sub vcl_recv {
        if (req.request == "PURGE") {
                if (!client.ip ~ purgers) {
                        error 405 "Method not allowed";
                }
                return (lookup);
        }
}

sub vcl_hit {
        if (req.request == "PURGE") {
                purge;
                error 200 "Purged";
        }
}
sub vcl_miss {
        if (req.request == "PURGE") {
                purge;
                error 404 "Not in cache";
        }
}
sub vcl_pass {
        if (req.request == "PURGE") {
                error 502 "PURGE on a passed object";
        }
}

The code itself is available on our GitHub Project page. Feel free to contribute and add any additional functionality.

It is important to note that what differentiates our solution from the existing ones out there is that our script manipulates the Host header of the curl request in order to submit the same hostname/URL request across the array of Varnish servers. That way the identical request is received by multiple Varnish servers with no local hosts file editing or anything like that.
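As a minimal sketch of that idea (the full script is on our GitHub page; the server IPs, hostname and path below are placeholders), the purge loop boils down to something like this:

<?php
// Sketch of purging one URL across an array of Varnish servers.
// Assumes PURGE is permitted by the ACL shown above; IPs, hostname and path are placeholders.
$varnish_servers = array("10.0.0.10", "10.0.0.11", "10.0.0.12");
$host = "www.example.com";
$path = "/page/to/purge";

foreach ($varnish_servers as $server) {
        $ch = curl_init("http://" . $server . $path);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PURGE");
        // Override the Host header so every node resolves the same cached object
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("Host: " . $host));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        echo $server . ": HTTP " . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
        curl_close($ch);
}
?>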

There is lots of room for input sanity checks, better input logic and other options to integrate with Varnish more intuitively. Remember this is a starting point, but hopefully it is useful for you!


Will Google Give Ranking Preference to Websites Based On Security, Rather Than Content?

Posted by: Admin  :  Category: Web Hosting

On August 6, 2014, a post on Google’s Online Security Blog announced that the tech juggernaut had recently been experimenting with its search engine ranking algorithms. Google admitted to running tests that factored a site’s level of security into its formulas, giving a slight preference to encrypted results. While strong HTTPS encryption is commonly used to create web pages that ask for personal information such as the shopping carts and check out pages used by online merchants, it is not currently the standard for informational pages that do not involve secure information. If Google continues to prioritize pages that have adopted HTTPS encryption, the practice will affect both web design and online marketing.

 The Test

For Google, choosing to prioritize sites that have adopted HTTPS is only one of its many efforts to promote a secure web. Google’s own services, including Gmail and Drive, rely on strong HTTPS encryption to keep their clients’ personal information safe from identity thieves. Google has also created free resources to help webmasters prevent and fix security breaches.

There is little information available about how many different tests Google performed or how strongly encrypted connections were weighted in those tests. What they will say is that they were pleased with their findings. “We’ve seen positive results,” claims Google’s Security Blog, “so we’re starting to use HTTPS as a ranking signal.” At the moment, Google calls the signal “very lightweight.” It only affects a small number of queries (less than 1%) and is not nearly as important as more traditional signals such as high-quality content.

The Future 

Google is aware that their algorithms have a powerful influence over the way websites are made and maintained, and they anticipate that their decision will encourage more and more webmasters to fully embrace HTTPS. They also understand that this switch will take time. Although Google has currently set HTTPS as a lightweight ranking signal, they suggest that this may change. The more website owners who choose to adopt HTTPS, the stronger the signal may become. 

Best Practices 

Adopting HTTPS can be tricky. In an effort to help webmasters avoid common mistakes when making the switch, Google suggests the following basic tips to get started:

  • Determine the type of certificate you require: single-domain, multi-domain, or wildcard
  • Use 2048-bit key certificates
  • For resources on the same secure domain, use relative URLs
  • For all other domains, use protocol-relative URLs
  • Visit Google’s Site Move article for more information on how to change the address of your website
  • Don’t use robots.txt to block your HTTPS site from crawling
  • Let search engines index your pages where possible; avoid the noindex robots meta tag

The Reception

Opinions about Google’s decision to make HTTPS a search engine ranking signal have been mixed and are based largely on factors such as cost and speed in addition to security.

Proponents of the decision support Google’s efforts to make the web a safer place. Many also recognize that the new algorithm might improve search results. For instance, one supporter suggested that many malware sites would be less likely to spend the time and money on encryption. They would therefore drop lower in search results, be visited less often, and be less of a public threat. Another supporter sees Google’s decision as a way to correct a separate practice that lowers a page’s ranking for being slow. HTTPS is a common reason for slightly lower page speeds and has therefore negatively affected rankings in the past. Under the new system, developers will be rewarded instead of punished for being concerned about security.

Many of those who oppose Google’s new practice see HTTPS as an unnecessary expense. Certain developers are hesitant to invest the time and money necessary to encrypt web pages that hold no personal information merely to maintain their search engine ranking. Some even suggest that Google’s decision was influenced as much by the companies who stand to benefit from the deluge of upgrades as by an altruistic desire to keep the Internet safe. The truly cynical take things one step further, pointing out that HTTPS is not perfect and can be breached by a determined hacker “within a $10,000 budget and a couple of days.”

How Will the Web Change? 

Although it may seem premature to discuss how such a small part of Google’s current search algorithm will affect the web, it is unwise to overlook Google’s influence on the Internet. Not everyone approves of Google’s decision, yet most would agree that it will have a noticeable impact on web development in the coming years, particularly if and when the signal in question is strengthened.

More and more webmasters will embrace HTTPS. Websites that currently use HTTP will be upgraded and new websites will be encrypted from the beginning. Encryption is a sophisticated technique, and widespread use of this and other advanced web development strategies will challenge webmasters to evolve along with a constantly growing industry.

Savvy businesses will invest more into web development because they know how important search engine rankings can be for their bottom lines. These businesses will also factor encryption into a comprehensive marketing strategy while simultaneously doing their part to keep the web as safe as possible, precisely as Google intended.

Some strategists worry that prioritizing a page’s security features might allow content to suffer. They fear that if there is less incentive to make a website engaging and informative, businesses will forgo these details altogether. While Google may increase the weight encryption carries in its algorithm over time, it is extremely unlikely that security will ever trump content. On the contrary, a more sophisticated and secure web will in turn be better able to support more elaborate, educational, and entertaining content than ever before.

At Hanei Marketing, we keep a close eye on technology. We’re aware of the trends that can put you and your company ahead of the competition. If you are interested in a company that can develop an advanced b2b marketing strategy for your business, fill out our online form today and a representative from our firm will contact you shortly.

Top image ©GL Stock Images

