Archive for March, 2011


Mozilla hasn’t officially released a stable version of Firefox 4 yet, but some folks have spotted it on Mozilla’s Fastbull (FTP) servers.

You can download Mozilla Firefox 4 for Windows, Mac and Linux (32/64-bit) right away from the links here, for those who don’t wish to wait until Mozilla publishes a Personal Package Archive (PPA).

So apparently, Mozilla Firefox 4 is almost here. The new version of the open source browser packs some exciting new features, including tabs on top, a new consolidated menu button, and App Tabs.

You can now switch between tabs just by typing a tab’s name or URL into the Address bar, which helps prevent duplicate tabs.

Similar to Windows 7, Firefox 4 lets you pin the websites you visit most often as App Tabs. When you exit the browser and start it again later, these tabs are automatically loaded for you.

Other features include syncing your settings and passwords, alongside bookmarks, history and open tabs, across multiple devices (including mobile); organizing tabs into groups; and a new tab for managing your add-ons.

Firefox 4 ships the new JägerMonkey JavaScript engine, faster graphics, and other performance improvements such as faster start-up and page-load times.

Firefox users have high hopes for the final version of Firefox 4, as it has spent considerable time in development. So if you can’t wait for the official announcement, download Firefox 4 from the links below.

FTP links for Firefox 4

Download Firefox 4 for Windows

Download Firefox 4 for Mac OS X

Download Firefox 4 for Linux (x86, x64)

In the Facebook stream you’ll see a time period at the bottom of each item, for example: 4 minutes ago, 2 days ago, 3 weeks ago. In a recent project we had to show times in a similar fashion in our application’s activity stream, so I wrote a function to compute the duration.

From the Twitter XML response I got the date of the tweet, then converted it into seconds with this code:

date_default_timezone_set('GMT');
$tz       = new DateTimeZone('Asia/Colombo');
$datetime = new DateTime($status->created_at); // parse the tweet's created_at value
$datetime->setTimezone($tz);
$time_display = $datetime->format('D, M jS g:ia T');
$d = strtotime($time_display); // convert the date string back to a Unix timestamp
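If you only need the Unix timestamp and not the formatted display string, a simpler route (a sketch, not the original code) is to let strtotime() parse Twitter’s created_at format directly and skip the DateTime round-trip:

```php
<?php
// Sketch: strtotime() understands Twitter's created_at format directly.
// The date below is a sample value of the kind Twitter returns.
$created_at = 'Tue Mar 01 12:00:00 +0000 2011';
$d = strtotime($created_at); // Unix timestamp in seconds
echo $d; // 1298980800
```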

After getting the timestamp, I just pass it to my function. Here is the function:

/*
 * @method getTimeDuration
 * @param  int $unixTime  Unix timestamp
 * @return string  duration in seconds, minutes, hours, days, weeks, months or years
 */
function getTimeDuration($unixTime) {
    $secsago = time() - $unixTime;

    if ($secsago < 60) {
        $period = $secsago == 1 ? 'about a second ago' : 'about ' . $secsago . ' seconds ago';
    } elseif ($secsago < 3600) {
        $period = round($secsago / 60);
        $period = $period == 1 ? 'about a minute ago' : 'about ' . $period . ' minutes ago';
    } elseif ($secsago < 86400) {
        $period = round($secsago / 3600);
        $period = $period == 1 ? 'about an hour ago' : 'about ' . $period . ' hours ago';
    } elseif ($secsago < 604800) {
        $period = round($secsago / 86400);
        $period = $period == 1 ? 'about a day ago' : 'about ' . $period . ' days ago';
    } elseif ($secsago < 2419200) { // 4 weeks
        $period = round($secsago / 604800);
        $period = $period == 1 ? 'about a week ago' : 'about ' . $period . ' weeks ago';
    } elseif ($secsago < 29030400) { // 12 four-week "months"
        $period = round($secsago / 2419200);
        $period = $period == 1 ? 'about a month ago' : 'about ' . $period . ' months ago';
    } else {
        $period = round($secsago / 29030400);
        $period = $period == 1 ? 'about a year ago' : 'about ' . $period . ' years ago';
    }

    return $period;
}

Then in the view files I showed the data as:

<div class="period">
    <?=getTimeDuration($created);?>
</div>

The function already appends “ago”, so the view doesn’t need to add it again. If you need to do something similar, you can reuse the function above instead of writing a new one.
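As an aside, the same bucketing can be expressed more compactly with a lookup table. Here is a sketch of an equivalent, table-driven variant (relativeTime() is my own name, not from the original post):

```php
<?php
// A table-driven variant of getTimeDuration(): each row is
// (upper limit in seconds, divisor, singular label, plural label).
function relativeTime($unixTime) {
    $secsago = time() - $unixTime;
    $units = array(
        array(60,       1,       'a second', 'seconds'),
        array(3600,     60,      'a minute', 'minutes'),
        array(86400,    3600,    'an hour',  'hours'),
        array(604800,   86400,   'a day',    'days'),
        array(2419200,  604800,  'a week',   'weeks'),   // 4 weeks
        array(29030400, 2419200, 'a month',  'months'),  // 12 four-week "months"
    );
    foreach ($units as $u) {
        list($limit, $divisor, $one, $many) = $u;
        if ($secsago < $limit) {
            $n = ($divisor == 1) ? $secsago : round($secsago / $divisor);
            return ($n == 1) ? "about $one ago" : "about $n $many ago";
        }
    }
    $n = round($secsago / 29030400);
    return ($n == 1) ? 'about a year ago' : "about $n years ago";
}

echo relativeTime(time() - 90); // about 2 minutes ago
```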

 

I’ve discovered a huge drawback to the Twitter messaging system: it does not store links. The Twitter site itself will identify URLs in messages and convert them into clickable links for you automatically. But the magic ends at Twitter’s borders; anyone who wants to do the same on their own site is on their own.

So I consulted the almighty Google. I found plenty of raw regex, JavaScript, and Twitter-focused discussions on the matter, but the offered solutions and tips were lacking. I wanted to do this right, transparently via PHP in the background, with no JS required.

Finally, I found a small PHP script that accomplished what I needed. Here’s a renamed version, all code intact, that will find and convert any well-formed URL into a clickable <a> tag link.

function linkify( $text ) {
  $text = preg_replace( '/(?!<\S)(\w+:\/\/[^<>\s]+\w)(?!\S)/i', '<a href="$1" target="_blank">$1</a>', $text );
  $text = preg_replace( '/(?!<\S)#(\w+\w)(?!\S)/i', '<a href="http://twitter.com/search?q=#$1" target="_blank">#$1</a>', $text );
  $text = preg_replace( '/(?!<\S)@(\w+\w)(?!\S)/i', '@<a href="http://twitter.com/$1" target="_blank">$1</a>', $text );
  return $text;
}

Copy that into your code, then run your text containing unlinked URLs through it:

    <li><?php echo linkify($status->text) . '<br />' . $time_display; ?></li>
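To see what linkify() produces, here’s a quick standalone demonstration (the tweet text is made up; the function is repeated from above so the snippet runs on its own):

```php
<?php
// linkify() repeated from above so this demo is self-contained.
function linkify( $text ) {
  $text = preg_replace( '/(?!<\S)(\w+:\/\/[^<>\s]+\w)(?!\S)/i', '<a href="$1" target="_blank">$1</a>', $text );
  $text = preg_replace( '/(?!<\S)#(\w+\w)(?!\S)/i', '<a href="http://twitter.com/search?q=#$1" target="_blank">#$1</a>', $text );
  $text = preg_replace( '/(?!<\S)@(\w+\w)(?!\S)/i', '@<a href="http://twitter.com/$1" target="_blank">$1</a>', $text );
  return $text;
}

$tweet = 'Reading http://example.com with @alice';
echo linkify($tweet);
// Reading <a href="http://example.com" target="_blank">http://example.com</a>
// with @<a href="http://twitter.com/alice" target="_blank">alice</a>
```

Note that the URL replacement runs first, then hashtags, then @mentions; the lookaheads keep the later passes from matching inside the anchors inserted by the earlier ones.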

I am not a fan of social networking or so-called lifestreaming. I think it’s a BS excuse to fiddle on your computer more. Instead of telling everyone where you are and what you’re doing, go out and meet some friends for a drink.

However I did find a practical use for Twitter in a recent issue of php|architect (Twitter as a Development Tool by Sam McCallum). The article discussed using Twitter as an automated logger, where a program would make posts to a Twitter account based on system actions (i.e. log in/out, create accounts, etc.).

I decided to turn the idea around a bit and use Twitter as an activity log to chronicle my development work on a new project. Think SVN log comments without the repository. The site itself is currently a simple placeholder page, so Twitter updates make an easy way to keep a website fresh while building out the service that will eventually reside there. It also engages the users that wind up looking at the site, letting them know that it might be something of interest to them. That’s to say nothing of any SEO or attention-grabbing effects that may result from having a Twitter stream.

Given the rabidity surrounding said social networking silliness, I thought finding a suitable plug ’n play solution would be easy. Surprisingly (or perhaps unsurprisingly), many of the Twitter scripts I found were plain garbage. The following code was put together by sifting through what I found and combining the best working bits. So if this sounds interesting, or if you were also frustrated with the plethora of crappy Twitter code, here’s how you can easily display your Twitter updates on any site using PHP.

First, grab this function…

function twitter_status($twitter_id, $hyperlinks = true) {
  $c = curl_init();
  curl_setopt($c, CURLOPT_URL, "http://twitter.com/statuses/user_timeline/$twitter_id.xml");
  curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
  curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 3);
  curl_setopt($c, CURLOPT_TIMEOUT, 5);
  $response = curl_exec($c);
  $responseInfo = curl_getinfo($c);
  curl_close($c);
  if (intval($responseInfo['http_code']) == 200) {
    if (class_exists('SimpleXMLElement')) {
      $xml = new SimpleXMLElement($response);
      return $xml;
    } else {
      return $response;
    }
  } else {
    return false;
  }
}

I’m not going to discuss the various cURL options here or how Twitter uses cURL, as that’s outside the scope of this discussion. If you’re lost or curious, you can read up on the cURL library, cURL in PHP, and/or the Twitter API.

As its name implies, twitter_status() will connect to Twitter and grab the timeline for the Twitter account identified by the $twitter_id. The $twitter_id is a unique number assigned to every Twitter account. You can find yours by visiting your profile page and examining the RSS link at the bottom left of the page. The URL will look like this:

http://twitter.com/statuses/user_timeline/12345678.rss

That number at the end is your ID. Grab it and pass it as the lone argument to twitter_status(). Note that, as long as your Twitter profile is public, you do not need to pass any credentials to retrieve a user timeline; the API makes this information available to anyone, anywhere. There are more options available on the user_timeline API method, if you’re curious.

The next step is to actually use the returned data, which comes in one of two forms: a SimpleXML object, or a raw XML document. SimpleXML is preferred because it’s a PHP object, and allows you access to all the usual object manipulation. Very easy. SimpleXML was added to PHP starting with version 5. The PHP manual has all the necessary details on SimpleXML.
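As a tiny illustration of that object access (the XML here is a made-up fragment, not a real Twitter response), SimpleXML turns each repeated element into a property you can loop over:

```php
<?php
// Minimal SimpleXML example: parse a small XML document and iterate
// over its <status> children, just like the timeline listing does.
$raw = '<statuses><status><text>First tweet</text></status>'
     . '<status><text>Second tweet</text></status></statuses>';
$xml = new SimpleXMLElement($raw);
foreach ($xml->status as $status) {
    echo $status->text, "\n";
}
// First tweet
// Second tweet
```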

The following code example assumes you’re using SimpleXML. Here I am taking the first five results and putting them in an HTML list. I’ll include a link to view the profile, as well as an error message in case Twitter is suffering from one of its famous fail-whale spasms.

 

<ul>
<?php
if ($twitter_xml = twitter_status('12345678')) {
  $i = 0; // counter must be initialized before the loop
  foreach ($twitter_xml->status as $status) {
?>
  <li><?php echo $status->text; ?></li>
<?php
    if (++$i == 5) break;
  }
?>
  <li><a href="http://twitter.com/YOUR_PROFILE_HERE">more...</a></li>
<?php
} else {
  echo 'Sorry, Twitter seems to be unavailable at the moment...again...';
}
?>
</ul>

With all the fancy cURL-based APIs out there these days (Facebook and Twitter immediately come to mind), using cURL to directly access and manipulate data is becoming quite common. However, as with all programming, there’s always the chance of an error, so these calls must be immediately followed by error checks to ensure everything went as planned.

Most decent APIs will return their own custom errors when an internal problem occurs, but that does not account for issues with the connection itself. So before your application goes looking for API-based errors, it should first check the returned HTTP status code to ensure the connection went well.

For example, Twitter-specific error messages are always paired with a “400 Bad Request” status. The message is of course helpful, but it’s far easier (as you’ll see) to find the status code from the response headers and then code for the exceptions as necessary, using the error text for logging and future debugging.

Anyway, the HTTP status code, also called the “response code,” is a number that corresponds with the result of an HTTP request. Your browser gets these codes every time you access a webpage, and cURL calls are no different. The following codes are the most common (excerpted from the Wikipedia entry on the subject)…

  • 200 OK
    Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.
  • 301 Moved Permanently
    This and all future requests should be directed to the given URI.
  • 400 Bad Request
    The request contains bad syntax or cannot be fulfilled.
  • 401 Unauthorized
    Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
  • 403 Forbidden
    The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
  • 404 Not Found
    The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
  • 500 Internal Server Error
    A generic error message, given when no more specific message is suitable.

So now that we know what we’re looking for, how do we actually get it? Fortunately, PHP’s cURL support makes these checks pretty easy; the process just isn’t obvious. We need a function called curl_getinfo(). It returns an array full of useful information, but we only need the status number, and we can set its arguments so that we get only this number back, like so…

// must set $url first. Duh...
$http = curl_init($url);
// do your curl thing here
$result = curl_exec($http);
$http_status = curl_getinfo($http, CURLINFO_HTTP_CODE);
echo $http_status;

curl_getinfo() returns data for the last cURL request, so you must execute the cURL call first, then call curl_getinfo(). The key is the second argument: the predefined constant CURLINFO_HTTP_CODE tells the function to forgo all the extra data and return just the HTTP status code (as an integer).

Echoing out the variable $http_status gets us the status code number, typically one of those outlined above.
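Putting it together, a small helper (my own naming, not from any library) can translate the raw code into the cases we actually care about before any API-specific parsing:

```php
<?php
// Classify an HTTP status code from curl_getinfo() before looking for
// API-level errors. The category names here are illustrative.
function classify_http_status($code) {
    $code = intval($code);
    if ($code == 200) return 'ok';           // parse the body normally
    if ($code == 400) return 'api_error';    // e.g. Twitter's error text lives here
    if ($code >= 500) return 'server_error'; // log it and retry later
    return 'other';                          // 301, 401, 403, 404, ...
}

// Usage after a cURL call:
//   $http_status = curl_getinfo($http, CURLINFO_HTTP_CODE);
//   $what = classify_http_status($http_status);
echo classify_http_status(200); // ok
```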

It’s a common problem with no single right answer: extract the top domain (e.g. example.com) from a given string, which may or may not be a valid URL. I had need of such functionality recently and found answers around the web lacking. So if you ever “just wanted the domain name” out of a string, give this a shot…

<?php
function get_top_domain($url, $remove_subdomains = 'all') {
  $host = strtolower(parse_url($url, PHP_URL_HOST));
  if ($host == '') $host = $url;
  switch ($remove_subdomains) {
    case 'www':
      if (strpos($host, 'www.') === 0) {
        $host = substr($host, 4);
      }
      return $host;
    case 'all':
    default:
      if (substr_count($host, '.') > 1) {
        preg_match("/^.+\.([a-z0-9\.\-]+\.[a-z]{2,4})$/", $host, $host);
        if (isset($host[1])) {
          return $host[1];
        } else {
          // not a valid domain
          return false;
        }
      } else {
        return $host;
      }
  }
}

// some examples
var_dump(get_top_domain('http://www.validurl.example.com/directory', 'all')); // "example.com"
var_dump(get_top_domain('http://www.validurl.example.com/directory', 'www')); // "validurl.example.com"
var_dump(get_top_domain('domain-string.example.com', 'all'));                 // "example.com"
var_dump(get_top_domain('domain-string.example.com/nowfails', 'all'));        // false
var_dump(get_top_domain('finds the domain url.example.com', 'all'));          // "example.com"
var_dump(get_top_domain('12.34.56.78', 'all'));                               // false
?>

Most of the examples are simply proofs, but I want to draw attention to the string in example #4, 'domain-string.example.com/nowfails'. This is not a valid URL, so parse_url() finds no host component, forcing the script to fall back on the entire original string. In turn, the path part of the string breaks the regex, causing a complete failout (return false;).

Is there a way to account for this? Surely, however I’m not about to tap that massive keg of exceptions (i.e. just a slash, slash plus path, slash plus another domain in a human-readable string, etc).
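One cheap partial fix (my own sketch, and it only handles the plain-slash case, not the whole keg of exceptions): chop off everything from the first slash before handing the string to the regex:

```php
<?php
// Hypothetical pre-processing step for get_top_domain(): drop any path
// portion so 'host.example.com/path' degrades to 'host.example.com'
// instead of tripping up the regex. Handles only this one case.
function strip_path($host) {
    $slash = strpos($host, '/');
    return ($slash === false) ? $host : substr($host, 0, $slash);
}

echo strip_path('domain-string.example.com/nowfails'); // domain-string.example.com
```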

No regex for validating URLs or email addresses is ever perfect; the “strict” RFC requirements are too damn broad. So I did what I always do: chose “what works” over “what’s technically right.” This one requires 2-4 characters for the top-level domain (TLD), so it doesn’t allow the .museum TLD, and it doesn’t check whether the provided TLD actually exists. If you need further verification, that’s on you. Here’s the current full list of valid TLDs, provided by IANA.

If you need to modify the regex at all, I highly recommend you read this article about email address regexes first, for two reasons:

  1. There’s a ton of overlap between email and URL regex matching
  2. It will point out all the gotchas in your “better” regex theory that you didn’t think about

Recently, I ran into a weird problem while trying to update my friend’s Nokia 5800 XpressMusic via the Nokia Software Updater application: the updater kept failing with an error message.

Steps to solve this problem :

  1. Go to Control Panel > Administrative Tools > Services.
  2. Find the services “Internet Connection Sharing” and “Windows Firewall”.
  3. Stop both services then start Nokia Software Updater.

 

Cheers   🙂  !!!

 

Apple‘s CEO Steve Jobs is ranked as the world’s 110th richest person, with a net worth of $8.3 billion, according to Forbes’ annual list. Jobs’ net worth is up from $5.5 billion and 136th place last year, so it’s been quite a milestone in one year.



It’d also be interesting to take a look at Facebook‘s Mark Zuckerberg, whose net worth stands at $13.5 billion; Zuckerberg is ranked as the 52nd richest person in the world.


For those who want to know more: Mexican telecommunications tycoon Carlos Slim Helú holds the top spot on the list, followed by Microsoft founder Bill Gates, with net worths of $74 billion and $56 billion respectively.

Firefox 4 RC is out!

Posted: March 11, 2011 in News

The wait is over: Firefox 4 Release Candidate (RC) is available for download for Windows, Mac and Linux.

Here’s the official announcement:

Mozilla Firefox 4 for Windows, Mac and Linux has exited the beta cycle and is now available as a release candidate in more than 70 languages. The millions of users testing Firefox 4 will be automatically updated to this version and will join our Mozilla QA team in validating the new features, enhanced performance and stability and HTML5 capabilities in Firefox 4. Testers are encouraged to check out the Web O’ Wonder in order to see the future of the Web with cutting edge demos that showcase the incredible online experiences developers can now create and users can experience. Developers can submit their own demos to the Mozilla Developer Network Demo Studio.

The important stuff from the release notes:

This Firefox 4 RC is considered to be stable and safe to use for daily web browsing, though the features and content may change before the final product release. At this time many Add-ons may not yet have been tested by their authors to ensure that they are compatible with this release. If you wish to help test Add-on compatibility, please install the Add-on Compatibility Reporter – your favorite Add-on author will appreciate it!

Download available here.

Release notes here.

Changelist here.

Internet Explorer 9 RC (Release Candidate) was released on 10th February; now Microsoft is ready to release the final (RTM) version of Internet Explorer 9 on 14th March at the SXSW conference.

Microsoft has already said that the Internet Explorer 9 RC will automatically update to the final version of IE9. But it’s unclear whether Microsoft will push this update on 14th March at the SXSW conference, or whether we’ll have to wait for the next Patch Tuesday. In either case, a direct download should be available around 9pm on 14th March.

On the other hand, Microsoft Developer Network India has tweeted that the final RTM version of Internet Explorer 9 will be released on 24th March at Tech.Ed in India.
Microsoft might launch Internet Explorer 9 on 14th March and then officially launch it in India on 24th March. So the final RTM of Internet Explorer 9 is expected on 14th March, and is confirmed for release in India on 24th March.