Today Twitter rolled out a long-awaited feature: automatic link shortening. Now, whenever you type a link into the Twitter webapp, it will shorten anything over 11 characters using the t.co service. What’s really great is that your original link (or a truncated version of it) still shows up in tweets, so people know where they’re going (and don’t have to worry about phishing attacks). Sadly, if you want to use anything other than t.co (like bit.ly, which provides statistics and other features), you’ll still have to shorten links from that service’s home page. Even so, if you use the Twitter webapp instead of a dedicated client, this is a huge convenience. Note that the feature still seems to be rolling out, and you might not see it on your account just yet—but you should soon.

[via Twitter Blog]

The most observant of you will surely remember the words of Steve Jobs when, about a year ago, he presented FaceTime, Apple’s new video-calling service for iOS devices, at first only for the fourth-generation iPhone. Somewhat surprisingly, it was Steve Jobs himself who announced that FaceTime would remain a WiFi-only feature “throughout 2010.” Now, at the end of May 2011, one wonders whether things will change with the new iOS 5.

Certainly almost no one expected that FaceTime over 3G could be introduced without a major iOS upgrade, which points to the new 5.0 firmware that Apple will present on June 6 on the stage of the Moscone Center, during the opening keynote of WWDC 2011.

Could FaceTime over 3G be one of the new features of iOS 5? Very likely. Technically, Apple could have implemented the function from the start. However, to make sure users always got an efficient and reliable experience, the company preferred to wait and launch FaceTime as WiFi-only, primarily to test how users actually use it and also to assess its adoption (FaceTime could initially be used only between iPhone 4 handsets, and later also on the Mac, fourth-generation iPod touch and iPad 2), which was limited by the requirement of a WiFi connection to video call another contact.

Apple has nevertheless proved to be very attentive to this technology: it has been improved with every new iPhone firmware update, both internally and through integration with other applications on the Mac, and with the new HD camera implemented on the latest models. Precisely for this reason, and given the good results obtained with FaceTime over 3G on jailbroken iPhones (thanks to programs such as 3G Unrestrictor, available via Cydia), it would be no surprise if, with iOS 5 in 2011, Apple allowed FaceTime to be used over 3G as well as WiFi.

If this happens, Apple users could hypothetically video call each other completely free of charge. It remains to be seen, however, how the Italian telephone operators would respond to a solution that closely resembles VoIP, which some of them certainly do not love, and what the actual impact on users’ data traffic would be.

Recalling the words of Steve Jobs, “FaceTime will only work on WiFi for 2010,” we are fairly confident that with iOS 5 Apple can implement the long-awaited 3G video calling on the iPhone and iPad 2.

Given that Google Chrome’s market share keeps growing, you may wonder about the reasoning behind this article’s title. The fact that Chrome is becoming more popular for Web browsing does not mean that Web developers use it while working on their projects.

For me, that is not the case. When doing Web development work, I often feel the need to switch to Firefox, for the reasons listed below.

Keep in mind that this article is not exactly a rant against Google. Google has given us Web developers a lot to be thankful for, but when it comes to Chrome, there is still a lot left to be desired.

I decided to write this post now in the hope that somebody at Google reads it and does something to address these issues, which quite often upset many of us, as we do not want to keep switching browsers all the time.

1. HTML viewing

When the code of our sites has bugs, it often generates HTML that is incorrect or even invalid, so we need to examine the HTML code to help figure out what is wrong.

Firefox has this awesome feature that lets you select a portion of a page and shows you the exact HTML that corresponds to the selected portion. There is no such feature in Chrome.

The best you have is the Inspect Element feature, which finds the HTML code for the page element under your mouse pointer. It is not the same thing: if I select a region of the page, I want to view the HTML of the whole region, not just a single element.

Another annoyance is that Chrome tries to beautify the HTML code. This means that if you have malformed HTML, you will not see where it is malformed, as Chrome shows you a beautified version of the HTML after it has already been fixed. I wish there were an option to disable beautification.

2. HTML Validation

Another great feature of Firefox is the ability to show any HTML validation errors your pages may have. Actually, this is provided by the HTML Validator extension.

I tried several extensions for the same purpose in Chrome, but none was nearly as good. Some just pass the page URL to the W3C validation service, which is no good: when you are developing a non-public page, the W3C service cannot access it, nor can it access the page as a logged-in user of the site you are developing.

Other extensions try to copy the currently loaded page and pass it to the validation service, but none of them highlights the invalid HTML the way the Firefox HTML Validator extension does.

Also, the Firefox extension does not rely on an external validation service, which means you can validate your pages even when you have no Internet access.

So, for all these reasons, an equivalent of the Firefox HTML Validator extension is seriously needed for Chrome.

3. Disable JavaScript

Sometimes you need to test your site with JavaScript disabled. The only way to do that in Chrome is to go into the preferences and disable it there. This is a real drag.

In Firefox you can use the Web Developer extension by Chris Pederick to add a button to the browser toolbar to quickly disable and re-enable JavaScript any time you want.

There is also a Web Developer extension for Chrome by the same developer, but it does not provide a means to disable JavaScript.

The problem is a limitation of the Chrome API exposed to extensions: it provides no way for an extension to disable JavaScript.

There is a feature request in the Chromium project to implement the necessary support for disabling JavaScript. The feature was even assigned about a year ago to be implemented, but it never happened. Oddly, users are no longer allowed to post further comments to that feature request.

4. Empty the browser cache

Sometimes you need to force the cache of the browser to be emptied, so fresh content is retrieved from the server of the site you are developing.

This has the same limitation as disabling JavaScript: you cannot do it from an extension; you need to go into the Chrome preferences.

There is also a feature request to support emptying the browser cache from an extension. It was filed more than two years ago, but only assigned for implementation three months ago.

5. Switching the browser user agent identification

Sometimes you need to access your site in development while pretending to use a different browser, so you can check that the site adapts to the current browser as you expect.

For instance, if you are serving an RSS feed through FeedBurner, you need to redirect all browsers to the FeedBurner feed URL, unless the current user agent is FeedBurner itself checking whether your feed was updated. So it would be useful to make the browser pretend to be FeedBurner, so you can check that the redirect works as intended.
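As an aside, the server side of that redirect is easy to sketch in PHP. This is only an illustration, not FeedBurner's documented behavior: the feed URL and the user-agent substring being checked are assumptions.

```php
// Hypothetical sketch: send every visitor to the FeedBurner copy of the feed,
// except FeedBurner's own fetcher, which must read the original feed.
// The 'FeedBurner' substring and the feed URL are illustrative assumptions.
function redirect_to_feedburner($userAgent, $feedburnerUrl) {
    if (stripos($userAgent, 'FeedBurner') === false) {
        header('Location: ' . $feedburnerUrl, true, 302);
        return true;  // redirected a regular browser or feed reader
    }
    return false;     // FeedBurner itself: serve the original feed
}
```

With a working user-agent switcher in the browser, you could then exercise both branches without waiting for FeedBurner's next poll.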

In Firefox you can use the User Agent Switcher extension, also by Chris Pederick. There is also an extension named User Agent Switcher for Chrome. The problem is that it does not work. Well, it does, but not in the way you expect.

This extension can only change the browser identification exposed to JavaScript. The HTTP requests sent to the server will not carry the user agent identification string you need to be sent.

I suspect there is a feature request to implement this in Chromium, but I did not find it. Until that feature is implemented, we always have to resort to Firefox, which works as needed with the User Agent Switcher extension.

6. Buttons in the status bar

One good thing about Firefox and most other browsers is that you can have useful buttons in the status bar at the bottom of the browser window. That is where Firebug and other useful extension buttons appear. When you want to use them, it is very easy to click a button to open Firebug and debug your JavaScript code.

Chrome practically eliminated the status bar. It is only used to show some temporary messages. If you want to open the developer tools to debug your JavaScript code or check the page HTML, you need either to find that function in the menus or memorize a non-trivial key sequence.

You eventually get used to this, but it would be much more user-friendly if those extensions could be reached from the status bar at the bottom, or even from the top of the browser window.

7. Caching of posted pages in the browser history

Sometimes you need to go back in your browser history to a page that was presented after submitting a form using the POST method. However, you do not want to repeat the request that was sent to the server when that page was served.

In some cases, whose exact circumstances I could not determine, Chrome asks if I want to post the form again, and does not show me the page from the history if I decline. Firefox does not have this problem: it always shows me the page in the browser history, even if it was the result of a posted form.

8. The Flash plugin crashes frequently

I do not develop sites in Flash. However, sometimes I need to access certain Google sites that provide useful information presented using Flash. That is the case of Google Analytics and Google Webmaster Tools.

Unfortunately, the Flash plugin shipped with Chrome crashes frequently. That does not happen when using Firefox to access the exact same pages.

I would prefer that Google did not use Flash on such sites; most of what they present does not really need it. I assume that changing those sites not to rely on Flash would take more development resources than Google wants to spend. In that case, Google needs to fix the Flash plugin that ships with Chrome.

9. HTML editing generates malformed HTML

Nowadays, most sites that publish HTML content submitted by users provide a rich text WYSIWYG editing interface. This is done by setting the contenteditable HTML attribute, for instance on a div tag.

The problem is that HTML editing in Chrome is still quite buggy. Copying and pasting HTML in an editor often results in malformed HTML.

I have seen HTML meta tags appear out of nowhere in the middle of HTML pasted after being copied from other parts of the same document being edited. I also often see bogus CSS classes named Apple-style-span appear in pasted HTML, when no such styles existed before in the HTML being edited.

This is just a reminder that you need an HTML validation and filtering system in your server-side scripts to clean up any messy HTML submitted to the server after being edited in Chrome.

It is always a good idea to use such filters, as there is no guarantee that all browsers will always submit valid HTML. But the fact is that if you use Firefox you do not seem to get such malformed HTML.
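As a minimal sketch of such a server-side filter, here is what it might look like using only PHP's built-in strip_tags(). A real system should use a full filter such as HTML Purifier, since strip_tags() does not validate attributes; the tag whitelist below is an arbitrary assumption for illustration.

```php
// Sketch of a server-side clean-up pass for user-submitted HTML.
// The whitelist of allowed tags is an assumption; note that strip_tags()
// keeps attributes on allowed tags, so it is not a complete sanitizer.
function clean_submitted_html($html) {
    // Drops everything outside the whitelist, including stray <meta> tags
    // and <span class="Apple-style-span"> wrappers injected by the editor.
    $allowed = '<p><br><strong><em><ul><ol><li><a>';
    return strip_tags($html, $allowed);
}
```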

10. No feedback to bug reports

I tried to report some of the problems above using Chrome’s built-in bug reporting system. You go to the Tools menu, then Report an Issue, and it shows a nice bug report page that can even include a screenshot of the page you were browsing.

The problem is that I never got any feedback on my bug reports, so I do not know whether they were submitted and received properly, let alone seen and acted upon.

I do not know if the time I spent describing those bug reports was worth it. I suspect it would probably be more efficient to report any bugs to the Chromium project directly.

Maybe I am getting this wrong, but sometimes I get the feeling that Google does not see providing feedback to the user community as an important thing.

This reminded me of the App Engine issue about supporting PHP. It was the most requested feature for the App Engine project, yet Google decided not to support PHP in any form, citing a lack of resources as justification. That is an odd thing to say for a company that makes many billions of dollars in profit every year.

They also disallowed anybody from posting further comments to that feature request. It seems Google does not consider the feedback the PHP developer community can provide relevant, despite it probably being the largest Web developer community.

Conclusions

This article is mostly my opinion and does not necessarily represent what most PHP and other Web developers think about how Google should set its development priorities for Chrome and its other products.

It is possible that I have misunderstood certain aspects of how Chrome can address Web developer needs. Whether you agree or disagree with my opinions, or have other suggestions to solve the problems presented above, please feel free to post a comment telling me what you think.

There is a new scam going on on Facebook right now, which involves you, your friends, and the yet to be released iPhone 5. Here is how it works. You see in your stream that one of your friends commented on an article titled “First Exposure: Apple iPhone 5.” Because your friend commented on it and because you really want to know about this iPhone 5, you click on the link as well.

You are then directed to a domain ending in .info (which should be a bad sign in itself), where you are asked to enter a captcha code to prove you are human. That should really be a second warning that this is not legit…

After you verify that you aren’t a bot, a message is posted directly to your wall, notifying your friends that you have commented on the story. Your friends may or may not fall for it like you did. In the meantime, you are asked to fill out a survey to win some crappy prizes.

This technique is known as clickjacking. Although not as dangerous as a virus or spyware, it still acts on your behalf without your consent.

So really, watch out for what you click on. Sometimes a little good sense can go a long way.

[via]


Mozilla hasn’t officially released a stable version of Firefox 4 yet, but some folks have spotted it on Mozilla’s Fastbull (FTP) servers.

You can immediately download Mozilla Firefox 4 for Windows, Mac and Linux (32/64-bit) here, for those who don’t wish to wait until Mozilla releases a Personal Package Archive (PPA).

So apparently, Mozilla Firefox 4 is almost here. The new version of the open source browser packs some exciting new features, including tabs on top, a new consolidated menu button, and App Tabs.

You can now switch between tabs just by typing a tab’s name or link into the address (URL) bar, which prevents duplicate tabs.

Similar to Windows 7, Firefox 4 allows you to pin your most frequently visited websites as App Tabs. When you exit the browser and start it later, these tabs are automatically loaded for you.

Other features include syncing your settings and passwords, alongside bookmarks, history, open tabs and more, across multiple devices (including mobile); organizing tabs into groups; and a new tab for managing your add-ons.

Firefox 4 uses the new JägerMonkey JavaScript engine and brings faster graphics, faster start-up and page load times, and other performance improvements.

Firefox users have high hopes for the final version of Firefox 4, as it has spent considerable time in development. So if you can’t wait for the official announcement, download Firefox 4 from the links given below.

FTP links for Firefox 4

Download Firefox 4 for Windows

Download Firefox 4 for Mac OS X

Download Firefox 4 for Linux (x86, x64)

In the Facebook stream, you’ll see a time period at the bottom of each story, for example: 4 minutes ago, 2 days ago, 3 weeks ago…. In a recent project we had to show times in a similar fashion in our application’s activity stream, so I wrote a function to compute the time duration.

From the Twitter XML response I got the date of the tweet, then converted it into seconds with this code:

 date_default_timezone_set('GMT');
 $tz = new DateTimeZone('Asia/Colombo');
 $datetime = new DateTime($status->created_at); // parse the tweet's created_at date
 $datetime->setTimezone($tz);
 $time_display = $datetime->format('D, M jS g:ia T');
 $d = strtotime($time_display); // convert the formatted date string back to a Unix timestamp

After getting the timestamp, I just pass the value to my function. Here is the function:

/*
 * @method: getTimeDuration
 * @param:  unix timestamp
 * @return: duration in seconds, minutes, hours, days, weeks, months or years
 */
function getTimeDuration($unixTime) {
    $secsago = time() - $unixTime;

    if ($secsago < 60) {
        $period = $secsago == 1 ? 'about a second ago' : 'about ' . $secsago . ' seconds ago';
    } else if ($secsago < 3600) {
        $period = round($secsago / 60);
        $period = $period == 1 ? 'about a minute ago' : 'about ' . $period . ' minutes ago';
    } else if ($secsago < 86400) {
        $period = round($secsago / 3600);
        $period = $period == 1 ? 'about an hour ago' : 'about ' . $period . ' hours ago';
    } else if ($secsago < 604800) {
        $period = round($secsago / 86400);
        $period = $period == 1 ? 'about a day ago' : 'about ' . $period . ' days ago';
    } else if ($secsago < 2419200) {       // 4 weeks
        $period = round($secsago / 604800);
        $period = $period == 1 ? 'about a week ago' : 'about ' . $period . ' weeks ago';
    } else if ($secsago < 29030400) {      // 12 four-week "months"
        $period = round($secsago / 2419200);
        $period = $period == 1 ? 'about a month ago' : 'about ' . $period . ' months ago';
    } else {
        $period = round($secsago / 29030400);
        $period = $period == 1 ? 'about a year ago' : 'about ' . $period . ' years ago';
    }

    return $period;
}

Then in the view files I showed the data as:

<div class="period">
 <?=getTimeDuration($created);?>
</div>
Note that the function already appends “ago” itself, so the view must not add it again. If you need a similar feature, you can use the above function instead of writing a new one.

 

I’ve discovered a huge drawback to the Twitter messaging system: it does not store links as links. The Twitter site itself identifies URLs in messages and converts them into clickable links for you automatically. But the magic ends at Twitter’s borders; anyone who wants to do the same on their own site is on their own.

So I consulted the almighty Google. I found plenty of raw regex, JavaScript, and Twitter-focused discussions on the matter, but I found the offered solutions and tips lacking. I wanted to do this right, transparently via PHP in the background, with no JS required.

Finally, I found a small PHP script that accomplished what I needed. Here’s a renamed and lightly cleaned-up version that will find and convert any well-formed URL into a clickable <a> tag link.

function linkify( $text ) {
  // Turn bare URLs into links; the lookbehind/lookahead keep us from
  // matching inside a larger word or an existing tag
  $text = preg_replace( '/(?<!\S)(\w+:\/\/[^<>\s]+\w)(?!\S)/i', '<a href="$1" target="_blank">$1</a>', $text );
  // Turn #hashtags into Twitter search links (%23 is the URL-encoded #)
  $text = preg_replace( '/(?<!\S)#(\w+\w)(?!\S)/i', '<a href="http://twitter.com/search?q=%23$1" target="_blank">#$1</a>', $text );
  // Turn @mentions into profile links
  $text = preg_replace( '/(?<!\S)@(\w+\w)(?!\S)/i', '@<a href="http://twitter.com/$1" target="_blank">$1</a>', $text );
  return $text;
}

Copy that into your code, then run any text containing unlinked URLs through it:

    <li><?php echo linkify($status->text) . '<br />' . $time_display; ?></li>

I am not a fan of social networking or so-called lifestreaming. I think it’s a BS excuse to fiddle on your computer more. Instead of telling everyone where you are and what you’re doing, go out and meet some friends for a drink.

However I did find a practical use for Twitter in a recent issue of php|architect (Twitter as a Development Tool by Sam McCallum). The article discussed using Twitter as an automated logger, where a program would make posts to a Twitter account based on system actions (i.e. log in/out, create accounts, etc.).

I decided to turn the idea around a bit and use Twitter as an activity log to chronicle my development work on a new project. Think SVN log comments without the repository. The site itself is currently a simple placeholder page, so Twitter updates make an easy way to keep a website fresh while building out the service that will eventually reside there. It also engages the users that wind up looking at the site, letting them know that it might be something of interest to them. That’s to say nothing of any SEO or attention-grabbing effects that may result from having a Twitter stream.

Given the rabidity surrounding said social networking silliness, I thought that finding a suitable plug ’n play solution would be easy. Surprisingly (or perhaps unsurprisingly), many of the Twitter scripts I found were plain garbage. The following code was put together by sifting through what I found and combining the best working bits. So if this sounds interesting, or if you were also frustrated with the plethora of crappy Twitter code, here’s how you can easily display your Twitter updates on any site using PHP.

First, grab this function…

function twitter_status($twitter_id, $hyperlinks = true) {
  $c = curl_init();
  curl_setopt($c, CURLOPT_URL, "http://twitter.com/statuses/user_timeline/$twitter_id.xml");
  curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
  curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 3);
  curl_setopt($c, CURLOPT_TIMEOUT, 5);
  $response = curl_exec($c);
  $responseInfo = curl_getinfo($c);
  curl_close($c);
  if (intval($responseInfo['http_code']) == 200) {
    if (class_exists('SimpleXMLElement')) {
      $xml = new SimpleXMLElement($response);
      return $xml;
    } else {
      return $response;
    }
  } else {
    return false;
  }
}

I’m not going to discuss the various cURL options here, or how Twitter uses cURL, as it’s outside the scope of this discussion. If you’re lost or curious, you can read up on the cURL library, cURL in PHP, and/or the Twitter API.

As its name implies, twitter_status() will connect to Twitter and grab the timeline for the Twitter account identified by the $twitter_id. The $twitter_id is a unique number assigned to every Twitter account. You can find yours by visiting your profile page and examining the RSS link at the bottom left of the page. The URL will look like this:

http://twitter.com/statuses/user_timeline/12345678.rss

That 8-digit number at the end is your ID. Grab it and pass it as the lone argument to twitter_status(). Note that, as long as your Twitter profile is public, you do not need to pass any credentials to retrieve a user timeline; the API makes this information available to anyone, anywhere. There are more options available on the user_timeline API method, if you’re curious.

The next step is to actually use the returned data, which comes in one of two forms: a SimpleXML object or a raw XML document. SimpleXML is preferred because it’s a PHP object and allows all the usual object manipulation. Very easy. SimpleXML was added in PHP 5; the PHP manual has all the necessary details.

The following code example assumes you’re using SimpleXML. Here I am taking the first five results and putting them in an HTML list. I’ll include a link to view the profile, as well as an error message in case Twitter is suffering from one of its famous fail-whale spasms.

 

<ul>
<?php
if ($twitter_xml = twitter_status('12345678')) {
  $i = 0; // count the statuses shown so far
  foreach ($twitter_xml->status as $status) {
?>
  <li><?php echo $status->text; ?></li>
<?php
    if (++$i == 5) break;
  }
?>
  <li><a href="http://twitter.com/YOUR_PROFILE_HERE">more...</a></li>
<?php
} else {
  echo 'Sorry, Twitter seems to be unavailable at the moment...again...';
}
?>
</ul>

With all the fancy cURL-based APIs out there these days (Facebook and Twitter immediately come to mind), using cURL to directly access and manipulate data is becoming quite common. However, as with all programming, there’s always the chance an error will occur, so these calls must be immediately followed by error checks to ensure everything went as planned.

Most decent APIs will return their own custom errors when an internal problem occurs, but that does not account for issues with the connection itself. So before your application goes looking for API-level errors, it should first check the returned HTTP status code to ensure the connection went well.

For example, Twitter-specific error messages are always paired with a “400 Bad Request” status. The message is of course helpful, but it’s far easier (as you’ll see) to find the status code in the response headers and then code for the exceptions as necessary, using the error text for logging and future debugging.
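That order of checks, connection status first and API error text second, can be sketched in PHP like this. The &lt;error&gt; element parsed here mirrors the shape of Twitter-style XML error responses, but treat that format as an assumption for illustration.

```php
// Sketch: decide success/failure from the HTTP status code first, then pull
// the API's own error text (assumed to live in an <error> element) only for
// logging and debugging.
function check_api_response($httpStatus, $body) {
    if ($httpStatus == 200) {
        return array('ok' => true, 'error' => null);
    }
    // Non-200: record the status, and the API's message if we can find one
    $error = 'HTTP ' . $httpStatus;
    if (preg_match('/<error>(.*?)<\/error>/s', $body, $m)) {
        $error .= ': ' . $m[1];
    }
    return array('ok' => false, 'error' => $error);
}
```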

Anyway, the HTTP status code, also called the “response code,” is a number that corresponds to the result of an HTTP request. Your browser receives these codes every time you access a webpage, and cURL calls are no different. The following codes are the most common (excerpted from the Wikipedia entry on the subject)…

  • 200 OK
    Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.
  • 301 Moved Permanently
    This and all future requests should be directed to the given URI.
  • 400 Bad Request
    The request contains bad syntax or cannot be fulfilled.
  • 401 Unauthorized
    Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
  • 403 Forbidden
    The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
  • 404 Not Found
    The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
  • 500 Internal Server Error
    A generic error message, given when no more specific message is suitable.

So now that we know what we’re looking for, how do we actually get it? Fortunately, PHP’s cURL support makes performing these checks pretty easy; it just doesn’t make the process obvious. We need a function called curl_getinfo(). It returns an array full of useful information, but we only need the status number. Fortunately, we can pass an argument so that we get only this number back, like so…

// must set $url first. Duh...
$http = curl_init($url);
// do your curl thing here
$result = curl_exec($http);
$http_status = curl_getinfo($http, CURLINFO_HTTP_CODE);
echo $http_status;

curl_getinfo() returns data for the last cURL request, so you must execute the cURL call first, then call curl_getinfo(). The key is the second argument: the predefined constant CURLINFO_HTTP_CODE tells the function to forgo all the extra data and return just the HTTP status code.

Echoing the variable $http_status gives us the status code number, typically one of those outlined above.
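From there, a simple switch over the common codes listed above makes the handling explicit. The messages below are placeholders; real code would log and branch however the application requires.

```php
// Map the common HTTP status codes from the list above to short descriptions.
// The wording of each message is an illustrative placeholder.
function describe_http_status($code) {
    switch ($code) {
        case 200: return 'OK';
        case 301: return 'Moved permanently - update the stored URL';
        case 400: return 'Bad request - check the API error text';
        case 401: return 'Unauthorized - credentials missing or wrong';
        case 403: return 'Forbidden - authenticating will not help';
        case 404: return 'Not found - may become available again later';
        case 500: return 'Server error - retry later';
        default:  return 'Unexpected status ' . $code;
    }
}
```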