Posts Tagged ‘PHP’

 

Outputs the time in seconds that it takes for a PHP page to load.

<?php
 
// Insert this block of code at the very top of your page: 
 
$time = microtime(); 
$time = explode(" ", $time); 
$time = $time[1] + $time[0]; 
$start = $time; 
 
// Place this part at the very end of your page 
 
$time = microtime(); 
$time = explode(" ", $time); 
$time = $time[1] + $time[0]; 
$finish = $time; 
$totaltime = ($finish - $start); 
printf ("This page took %f seconds to load.", $totaltime); 
 
?>  
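Incidentally, since PHP 5 microtime() accepts a boolean argument that makes it return the time as a float directly, which shortens the same timer considerably. A minimal sketch of that variant:

<?php

// Top of the page: microtime(true) returns the current time in seconds as a float (PHP 5+)
$start = microtime(true);

// ... the rest of the page ...

// Bottom of the page
$finish = microtime(true);
printf("This page took %f seconds to load.", $finish - $start);

?>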

In the Facebook stream you'll see a relative time at the bottom of each item. For example: 4 minutes ago, 2 days ago, 3 weeks ago. In a recent project we had to show timestamps in a similar fashion for our application's activity stream, so I wrote a function to produce the time duration.

From the Twitter XML response I got the date of the tweet, then converted that date into a Unix timestamp (seconds) like this:

 date_default_timezone_set('GMT');
 $tz = new DateTimeZone('Asia/Colombo');
 $datetime = new DateTime($status->created_at); // pass the tweet's created_at date to PHP's DateTime
 $datetime->setTimezone($tz);
 $time_display = $datetime->format('D, M jS g:ia T');
 $d = strtotime($time_display); // convert the formatted date string back to a Unix timestamp
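If you're on PHP 5.3 or newer, DateTime::getTimestamp() returns the Unix timestamp directly, which avoids re-parsing the formatted string:

 $d = $datetime->getTimestamp(); // Unix timestamp of the tweet, no string re-parsing needed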

After getting that value I just pass it to my function. Here is the function:

/*
 * @method: getTimeDuration
 * @param: unix timestamp
 * @return: duration of seconds, minutes, hours, days, weeks, months, years
 */
function getTimeDuration($unixTime) {
    $period  = '';
    $secsago = time() - $unixTime;

    if ($secsago < 60) {
        $period = $secsago == 1 ? 'about a second ago' : 'about ' . $secsago . ' seconds ago';
    } else if ($secsago < 3600) {
        $period = round($secsago / 60);
        $period = $period == 1 ? 'about a minute ago' : 'about ' . $period . ' minutes ago';
    } else if ($secsago < 86400) {
        $period = round($secsago / 3600);
        $period = $period == 1 ? 'about an hour ago' : 'about ' . $period . ' hours ago';
    } else if ($secsago < 604800) {
        $period = round($secsago / 86400);
        $period = $period == 1 ? 'about a day ago' : 'about ' . $period . ' days ago';
    } else if ($secsago < 2419200) {
        $period = round($secsago / 604800);
        $period = $period == 1 ? 'about a week ago' : 'about ' . $period . ' weeks ago';
    } else if ($secsago < 29030400) {
        $period = round($secsago / 2419200);
        $period = $period == 1 ? 'about a month ago' : 'about ' . $period . ' months ago';
    } else {
        $period = round($secsago / 29030400);
        $period = $period == 1 ? 'about a year ago' : 'about ' . $period . ' years ago';
    }

    return $period;
}

Then in the view file I display the data like this (the function already appends "ago", so there's no need to add it again):

<div class="period">
  <?=getTimeDuration($created);?>
</div>
If you need to do something similar, you can reuse the function above instead of writing a new one.

 

I've discovered a huge drawback to the Twitter messaging system: it does not store links. The Twitter site itself will identify URLs in messages and convert them into clickable links for you automatically. But the magic ends at Twitter's borders; anyone who wants to do the same on their site is on their own.

So I consulted the almighty Google. I found plenty of raw regex, javascript, and Twitter-focused discussions on the matter, but I found the offered solutions and tips lacking. I wanted to do this up right, transparently via PHP in the background. No JS required.

Finally, I found a small PHP script that accomplished what I needed. Here's a renamed and lightly cleaned-up version that will find and convert any well-formed URL into a clickable <a> tag link.

function linkify( $text ) {
  // Turn bare URLs into links (the lookarounds keep the match from starting or ending mid-word)
  $text = preg_replace( '/(?<!\S)(\w+:\/\/[^<>\s]+\w)(?!\S)/i', '<a href="$1" target="_blank">$1</a>', $text );
  // Link #hashtags to Twitter search; the # is URL-encoded as %23 so it survives in the query string
  $text = preg_replace( '/(?<!\S)#(\w+\w)(?!\S)/i', '<a href="http://twitter.com/search?q=%23$1" target="_blank">#$1</a>', $text );
  // Link @mentions to the corresponding Twitter profile
  $text = preg_replace( '/(?<!\S)@(\w+\w)(?!\S)/i', '@<a href="http://twitter.com/$1" target="_blank">$1</a>', $text );
  return $text;
}

Copy that into your code, then run your text containing unlinked URLs through it:

    <li><?php echo linkify($status->text) . '<br />' . $time_display; ?></li>
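Here's a quick standalone example with a made-up sample string, just to show the transformation:

<?php
// Hypothetical sample text; any string containing URLs, #hashtags, or @mentions will do
$tweet = 'Reading http://example.com/article #php thanks @example';
echo linkify($tweet);
// The URL becomes an <a> tag, #php links to Twitter search, and @example links to that profile
?>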

I am not a fan of social networking or so-called lifestreaming. I think it’s a BS excuse to fiddle on your computer more. Instead of telling everyone where you are and what you’re doing, go out and meet some friends for a drink.

However I did find a practical use for Twitter in a recent issue of php|architect (Twitter as a Development Tool by Sam McCallum). The article discussed using Twitter as an automated logger, where a program would make posts to a Twitter account based on system actions (i.e. log in/out, create accounts, etc.).

I decided to turn the idea around a bit and use Twitter as an activity log to chronicle my development work on a new project. Think SVN log comments without the repository. The site itself is currently a simple placeholder page, so Twitter updates make an easy way to keep a website fresh while building out the service that will eventually reside there. It also engages the users that wind up looking at the site, letting them know that it might be something of interest to them. That’s to say nothing of any SEO or attention-grabbing effects that may result from having a Twitter stream.

Given the rabidity surrounding said social networking silliness, I thought that finding a suitable plug ‘n play solution to this would be easy. Surprisingly (or perhaps unsurprisingly) many of the Twitter scripts I found were plain garbage. The following code was put together by sifting through what I found and combining the best working bits. So if this sounds interesting, or if you were also frustrated with the plethora of crappy Twitter code, here’s how you can easily display your Twitter updates on any site using PHP.

First, grab this function…

function twitter_status($twitter_id, $hyperlinks = true) {
  // note: $hyperlinks is accepted but not used inside this version of the function
  $c = curl_init();
  // Request the user's timeline as XML from the Twitter API
  curl_setopt($c, CURLOPT_URL, "http://twitter.com/statuses/user_timeline/$twitter_id.xml");
  curl_setopt($c, CURLOPT_RETURNTRANSFER, 1); // return the response instead of printing it
  curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 3); // give up connecting after 3 seconds
  curl_setopt($c, CURLOPT_TIMEOUT, 5);        // give up entirely after 5 seconds
  $response = curl_exec($c);
  $responseInfo = curl_getinfo($c);
  curl_close($c);
  if (intval($responseInfo['http_code']) == 200) {
    // Return a SimpleXML object if available, otherwise the raw XML string
    if (class_exists('SimpleXMLElement')) {
      $xml = new SimpleXMLElement($response);
      return $xml;
    } else {
      return $response;
    }
  } else {
    return false;
  }
}

I’m not going to discuss the various cURL options here, or how the Twitter API is accessed via cURL, as it’s outside the scope of this post. If you’re lost or curious, you can read up on the cURL library, cURL in PHP, and/or the Twitter API.

As its name implies, twitter_status() will connect to Twitter and grab the timeline for the Twitter account identified by the $twitter_id. The $twitter_id is a unique number assigned to every Twitter account. You can find yours by visiting your profile page and examining the RSS link at the bottom left of the page. The URL will look like this:

http://twitter.com/statuses/user_timeline/12345678.rss

The number at the end is your ID. Grab it and pass it as the lone argument to twitter_status(). Note that, as long as your Twitter profile is public, you do not need to pass any credentials to retrieve a user timeline. The API makes this information available to anyone, anywhere. There are more options available on the user_timeline method of the API, if you’re curious.

The next step is to actually use the returned data, which comes in one of two forms: a SimpleXML object, or a raw XML document. SimpleXML is preferred because it’s a PHP object, and allows you access to all the usual object manipulation. Very easy. SimpleXML was added to PHP starting with version 5. The PHP manual has all the necessary details on SimpleXML.
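For example, the text and timestamp of the most recent tweet can be read straight off the returned object (assuming the call didn't return false):

$twitter_xml = twitter_status('12345678');
echo $twitter_xml->status[0]->text;       // text of the newest tweet
echo $twitter_xml->status[0]->created_at; // and its creation date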

The following code example assumes you’re using SimpleXML. Here I am taking the first five results and putting them in an HTML list. I’ll include a link to view the profile, as well as an error message in case Twitter is suffering from one of its famous fail-whale spasms.

 

<ul>
<?php
if ($twitter_xml = twitter_status('12345678')) {
  $i = 0; // counter so we only show the first five tweets
  foreach ($twitter_xml->status as $status) {
?>
  <li><?php echo $status->text; ?></li>
<?php
    ++$i;
    if ($i == 5) break;
  }
?>
  <li><a href="http://twitter.com/YOUR_PROFILE_HERE">more...</a></li>
<?php
} else {
  echo 'Sorry, Twitter seems to be unavailable at the moment...again...';
}
?>
</ul>

With all the fancy cURL-based APIs out there these days (Facebook and Twitter immediately come to mind), using cURL to directly access and manipulate data is becoming quite common. However, like all programming, there’s always the chance for an error to occur, so these calls should be immediately followed by error checks to ensure everything went as planned.

Most decent APIs will return their own custom errors when an internal problem occurs, but that does not account for issues with the connection itself. So before your application goes looking for API-specific errors, it should first check the returned HTTP status code to ensure the request itself went well.

For example, Twitter-specific error messages are always paired with a “400 Bad Request” status. The message is of course helpful, but it’s far easier (as you’ll see) to find the status code from the response headers and then code for the exceptions as necessary, using the error text for logging and future debugging.

Anyway, the HTTP status code, also called the “response code,” is a number that corresponds with the result of an HTTP request. Your browser gets these codes every time you access a webpage, and cURL calls are no different. The following codes are the most common (excerpted from the Wikipedia entry on the subject)…

  • 200 OK
    Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.
  • 301 Moved Permanently
    This and all future requests should be directed to the given URI.
  • 400 Bad Request
    The request contains bad syntax or cannot be fulfilled.
  • 401 Unauthorized
    Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
  • 403 Forbidden
    The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
  • 404 Not Found
    The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
  • 500 Internal Server Error
    A generic error message, given when no more specific message is suitable.

So now that we know what we’re looking for, how do we go about actually getting it? Fortunately, PHP’s cURL support makes performing these checks pretty easy; it just doesn’t make the process obvious. We need a function called curl_getinfo(). It returns an array full of useful information, but we only need the status number. Fortunately, we can set the arguments so that we only get this number back, like so…

// must set $url first. Duh...
$http = curl_init($url);
// do your curl thing here
$result = curl_exec($http);
$http_status = curl_getinfo($http, CURLINFO_HTTP_CODE);
echo $http_status;

curl_getinfo() returns data for the last cURL request, so you must execute the cURL call first, then call curl_getinfo(). The key is the second argument; the predefined constant CURLINFO_HTTP_CODE tells the function to skip all the extra data and just return the HTTP status code.

Echoing out the variable $http_status gets us the status code number, typically one of those outlined above.
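To tie this back to the Twitter example, here’s a rough sketch of branching on that status code. The details of the error body vary by API, so treat the handling below as an assumption to check against the API docs rather than a definitive recipe:

<?php
// $url must be set first, as above
$http = curl_init($url);
curl_setopt($http, CURLOPT_RETURNTRANSFER, 1); // capture the body so it can be logged on failure
$result = curl_exec($http);
$http_status = curl_getinfo($http, CURLINFO_HTTP_CODE);
curl_close($http);

if ($http_status == 200) {
  // connection and request succeeded; safe to parse $result
} elseif ($http_status == 400) {
  // e.g. a Twitter-style "Bad Request"; keep the body around for debugging
  error_log('API returned 400: ' . $result);
} else {
  // anything else: treat the call as failed
  error_log('Unexpected HTTP status ' . $http_status . ' for ' . $url);
}
?>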

It’s a common problem with no single right answer: extract the top domain (e.g. example.com) from a given string, which may or may not be a valid URL. I had need of such functionality recently and found answers around the web lacking. So if you ever “just wanted the domain name” out of a string, give this a shot…

<?php
function get_top_domain($url, $remove_subdomains = 'all') {
  $host = strtolower(parse_url($url, PHP_URL_HOST));
  if ($host == '') $host = $url;
  switch ($remove_subdomains) {
    case 'www':
      if (strpos($host, 'www.') === 0) {
        $host = substr($host, 4);
      }
      return $host;
    case 'all':
    default:
      if (substr_count($host, '.') > 1) {
        preg_match("/^.+\.([a-z0-9\.\-]+\.[a-z]{2,4})$/", $host, $host);
        if (isset($host[1])) {
          return $host[1];
        } else {
          // not a valid domain
          return false;
        }
      } else {
        return $host;
      }
    break;
  }
}

// some examples
var_dump(get_top_domain('http://www.validurl.example.com/directory', 'all'));
var_dump(get_top_domain('http://www.validurl.example.com/directory', 'www'));
var_dump(get_top_domain('domain-string.example.com', 'all'));
var_dump(get_top_domain('domain-string.example.com/nowfails', 'all'));
var_dump(get_top_domain('finds the domain url.example.com', 'all'));
var_dump(get_top_domain('12.34.56.78', 'all'));
?>

Most of the examples are simply proofs, but I want to draw attention to the string in example #4, 'domain-string.example.com/nowfails'. This is not a valid URL, so the call to parse_url() fails, forcing the script to use the entire original string. In turn, the path part of the string causes the regex to break, causing a complete failout (return false;).

Is there a way to account for this? Surely, but I’m not about to tap that massive keg of exceptions (a bare trailing slash, a slash plus a path, a slash plus another domain in a human-readable string, etc.).
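That said, one small workaround (my own addition, not part of the function above) is to prepend a scheme when the string doesn’t have one, so parse_url() can split the host from the path. It only helps for strings that are otherwise URL-like:

<?php
// Hypothetical wrapper: assume http:// when no scheme is present, then defer to get_top_domain()
function get_top_domain_loose($url, $remove_subdomains = 'all') {
  if (!preg_match('#^\w+://#', $url)) {
    $url = 'http://' . $url; // purely so parse_url() separates host and path
  }
  return get_top_domain($url, $remove_subdomains);
}

var_dump(get_top_domain_loose('domain-string.example.com/nowfails', 'all')); // now returns "example.com"
?>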

No regex for validating URLs or email addresses is ever perfect; the “strict” RFC requirements are too damn broad. So I did what I always do: chose “what works” over “what’s technically right.” This one requires 2-4 characters for the top-level domain (TLD), so it doesn’t allow for the .museum TLD, and it doesn’t check whether the provided TLD is actually valid. If you need to do further verification, that’s on you. Here’s the current full list of valid TLDs provided by the IANA.

If you need to modify the regex at all, I highly recommend you read this article about email address regex first for two reasons:

  1. There’s a ton of overlap between email and URL regex matching
  2. It will point out all the gotchas in your “better” regex theory that you didn’t think about

VTC (Virtual Training Company) | 581 Mb

In Real World PHP Programming: The Basics, VTC author Mike Morton introduces PHP programming in a fashion that is immediately applicable to experienced and new programmers alike. This title does not focus on getting certified in PHP, but rather on applying PHP in everyday programming, including the proper terminology as well as common PHP slang. Starting with the absolute basics of PHP types and statements, Mike progresses through conditionals and loops, MySQL, and into advanced topics such as functions and session management. With working examples and applications of what you are learning shown throughout, Mike makes learning PHP an easy and enjoyable endeavour.
