Author: hasin

My slide in SQABD Lightning Talks – II

Last night's event, arranged by SQABD, was really nice – the second installment of their lightning talks. The event was a huge success: almost 180 developers came from 40-50 companies. There were 12 speakers, each presenting a 5-minute talk, and there was also a short Q/A session after each speech.

Luckily I got a chance to present my slides on “motivated team” there. It was a micro session, much like the TV show “5 Minutes to Fame”, so I tried to keep it fun-oriented, to keep the audience focused.

You can download my slides from here:

motivated-team-and-extracting-most-out-of-it-v2

Thanks to everyone who was present there – the speakers, volunteers, messengers, the SQABD group, and of course Sajjad bhai for being such an outstanding host.

Hot in hottDhaka

I recently came across hottDhaka, developed by hott media. It’s basically a community-driven social networking application, currently focused on restaurants. If you want to check out a new hott dish or sizzling, or a new restaurant to spend your evening in, you should first check hottDhaka for a review. hottDhaka features excellent reviews and a sample copy of the menu from each of these restaurants, and provides a nice interface to share your opinions. Maybe making friends with similar tastes? Why not!

Overall, I really like this site for its slick look-n-feel, soothing colors, and nice UI. I must say it’s a nice addition to the Bangla community. Cool! They are still in beta, and I hope the final release will come with some really cool features.

Bug in Twitter prevents you from updating your status

I found it last night while trying to update my status on Twitter. These days I am twittering a lot, and while exploring Twitter I was redirected to a URL. When I tried to update my status there, it failed repeatedly – and then I found this bug. lol

The URL that reproduces the bug is http://explore.twitter.com/home – it looks similar to your actual Twitter home (http://twitter.com/home) and it also shows you a twit box to update your status. But you cannot update your status from http://explore.twitter.com/home.

Check what was returned after the Ajax request (you have Firebug, right? Do it yourself):

403 Forbidden: The server understood the request, but is refusing to fulfill it. If you are posting from an API tool please ensure that the HTTP_REFERER header is not set.

Try it before they fix it 🙂 An interesting usability bug indeed.

One Month in I2We

One month has passed at I2we, a Berkeley-based social networking firm. Here is my team, and I am proud to be a part of it.

I2we team in Berkeley

Standing in back: Sean, Karel, and Huey.
Sitting in front: Jessica and Bemi.

🙂

What a day!!

……….

……..zzzzzzzzzzzzz…….

Whoops!! It is 11 AM.

Trying to restore Apache2, which got corrupted after a system update last evening.

grrrrrrrrrrr – what the hell!! – mod-php5 is not working at all!

grrrrrrrrrrrrrrrrr

…zzzzzzzzzzzzzzz….

…whoops, it’s 5 PM

Raju came with his new laptop and failed to set up his EST-610U EDGE card on Ubuntu – modprobe was not working, even with a vendor and a product ID.

It’s 6 PM – went to North Tower and had some mango juice.

Around 6:30 PM, Ahsan and Anupom came and gave me a copy of their new book on CakePHP.

7:30 PM – Manzil, Junal, Ahsan, and Anupom came over and we enjoyed the movie “Death Sentence” together.

10 PM – I went to Omi Azad’s apartment with Ayesha and Afif – had two scoops of ice cream…. yummy!!

11 PM – came home and started setting up XAMPP. Found that XAMPP is set to use its own MySQL socket instead of the system default, so I edited the bundled php.ini and set the following line to use my previous MySQL installation properly:
mysql.default_socket = /var/run/mysqld/mysqld.sock
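
A quick way to double-check which socket PHP will actually use (ini_get simply reads the active configuration value):

<?php
// should print /var/run/mysqld/mysqld.sock after the edit above
echo ini_get('mysql.default_socket');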

Sometime after 11 PM – XAMPP is now working. Set up the PostgreSQL addon and mod_python. The Python addon was not working because of a version conflict with mod_python.so, so I installed “libapache2-mod-python” and copied /usr/lib/apache2/modules/mod_python.so to the /opt/lampp/modules/ folder – Python is now working.

Around 12, Ayesha and I had our dinner.

1 AM to 3 AM – playing with GPG; set up a new key pair. My new public key is 2FD0F9E9.

3:08 AM – wrote this blog post and am preparing to sleep. Tomorrow must be a busy day!

whoops!!

Update, June 22 – July 5

1. Left Trippert Labs – many of you already know that.
2. Started with i2we inc as a Sr. Software Engineer from July 1.
3. Developed a draggable and localized virtual keyboard for an in-house project – you can see the demo at http://gopsop.com/vk.html. Right now it is generated completely on the fly, based on jQuery DOM manipulation. Please note that you cannot use it in any of your applications before October 1. Thanks also go to Tapos for fixing an IE-specific bug.
4. Integrated the Facebook app development platform with Orchid – now developing Facebook applications with Orchid is easy and a charm!
5. Planning for a vacation from July 15th to 31st.

This is my last week in TrippertLabs

I am leaving TrippertLabs by the end of this month. It is kinda painful leaving something which I have actively helped build as part of its management. And TrippertLabs became a big hit here in Bangladesh for PHP devs. In the past one year at TL I managed to staff it with 4 of the 5 ZCEs in Bangladesh and, in total, 8 awesomely skilled PHP devs, 4 outstanding game developers and animators, 3 QAs, and one administrator here in this local facility. TrippertLabs BD became a highly equipped development center for developing high-traffic, game-based social network applications (for Facebook, MySpace, Bebo, and OpenSocial), along with its other wings in Indonesia, Pakistan, India, Germany, and the USA.

So this is the end of a one-year journey with TrippertLabs here in Bangladesh.

I am planning a one-month vacation, and then I will start looking for a job again.

Look Ma, everyone's computing out there for me!

SETI@home is probably the greatest example of low-cost distributed computing becoming a big hit. After its tremendous success, many others started following the same strategy, using the power of distributed computing for other purposes like cancer research. In this article I will show you how you can use the same power at almost zero cost, especially for your web applications.

As I am currently working on building an open source version of FriendFeed (not targeted as an alternative, because those people at FriendFeed have done their job really well) and on scaling such a huge load effectively at low cost, I will mainly talk about FriendFeed throughout this blog post and use it as an example for my proposal.

If you consider FriendFeed as a repository of feed URLs and a lot of people related to each other, you can imagine how big it is, or could become in the near future. Scaling such a service would cost many developers countless sleepless nights. So let’s focus on where the problem actually is and how we can introduce distributed computing.

Besides optimizing the database to serve huge sets of data, one of the main problems of such a service is parsing millions of feeds at a regular interval. If you want to bear all that load on your own server, fine – if you can afford it. But what about a low-cost solution? Let’s consider a simple scenario: if your application has one million users and each of them browses your application for 10 minutes a day, you really have 10 million minutes of computational power a day just wasting away out there, in the lanes and by-lanes of the internet – heh heh. So let’s make use of that incredible CPU power. All you have to do is let the visitors’ machines do some calculations for you and free your server from the gigantic load.

Since the users of your application and the relations among them are stored in your database, you can easily find out the first-degree and second-degree friends of a specific user. If you don’t know what that means, it’s simple: if A is a friend of B and C is a friend of A, then A is B’s first-degree friend and C is B’s second-degree friend. For a huge social network, the relationships may look like the following when visualized:


[Image: social network relationship graph – image courtesy: http://prblog.typepad.com]
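
If the friendship edges live in a simple table, finding second-degree friends is one query away. Here is a minimal sketch, assuming a hypothetical friends(user_id, friend_id) table and PDO – not any real FriendFeed code:

<?php
// Second-degree friends of $userId: friends of friends, excluding
// the user himself and his first-degree friends.
function getSecondDegreeFriendIds(PDO $db, $userId)
{
    $sql = 'SELECT DISTINCT f2.friend_id
              FROM friends f1
              JOIN friends f2 ON f2.user_id = f1.friend_id
             WHERE f1.user_id = ?
               AND f2.friend_id <> ?
               AND f2.friend_id NOT IN (
                     SELECT friend_id FROM friends WHERE user_id = ?
               )';
    $stmt = $db->prepare($sql);
    $stmt->execute(array($userId, $userId, $userId));
    return $stmt->fetchAll(PDO::FETCH_COLUMN);
}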

Now, what we want to do is this: when B is visiting our application, we want to parse most of his/her second-degree friends’ feeds on the client side, using his browser. So while generating the page for B, we supply him a bunch of feed URLs, a hash of each feed’s last known update time (or a hash of the latest item of each corresponding feed), and a JavaScript-based parser script (for example, Google’s AJAX Feed API would do fine). While B is browsing our application, we parse those second-degree friends’ feeds using JavaScript, without bothering him for a single second, and post the parsed contents back (after checking against the hash for really updated content) to a server-side script, which then blindly (well, not totally – after some validation and authentication, for sure) inserts those results into our database. Now when A comes and visits his page (A is C’s first-degree friend), he will get all the latest results from C’s feeds, because B has already done the parsing job for us and we have the latest results from C’s feeds stored in our database.
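
To make that concrete, here is a rough sketch of the two server-side ends of that loop. Every helper and field name here (feedsForUsers, saveFeedItem, last_item_guid and so on) is made up for illustration, and a real version needs proper authentication and deduplication on top:

<?php
// ----- page.php: while generating B's page, hand the browser a work list -----
// feedsForUsers() is a hypothetical helper; getSecondDegreeFriendIds()
// is the sketch from earlier in this post.
$feeds = feedsForUsers($db, getSecondDegreeFriendIds($db, $currentUserId));
$workList = array();
foreach ($feeds as $feed) {
    $workList[] = array(
        'url'  => $feed['url'],
        // hash of what we already have, so the client-side parser can
        // detect "nothing new" without posting anything back
        'hash' => md5($feed['last_item_guid'] . $feed['last_updated']),
    );
}
echo '<script>var feedWorkList = ' . json_encode($workList) . ';</script>';

// ----- postback.php (a separate script): receive the parsed items -----
session_start();
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.0 403 Forbidden'); // only logged-in visitors may submit
    exit;
}
$items = json_decode(file_get_contents('php://input'), true);
foreach ((array) $items as $item) {
    // never trust the browser blindly: validate before inserting
    if (empty($item['feed_url']) || empty($item['title'])) {
        continue;
    }
    saveFeedItem($db, $item); // insert with a duplicate check
}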

There are definitely more challenges than are explained here – for example, what if a person is a second-degree friend of multiple users? In such cases, since we supply the last update time of these feeds while generating a page, we can calculate on the server side which feeds we really want to parse.
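
On the server side that selection can be a simple query: hand out only feeds that have not been refreshed recently, and that no other visitor has been handed in the last few minutes. The feeds table and its claimed_at column here are assumptions for the sketch:

<?php
// pick at most 20 stale, unclaimed feeds to put into a visitor's work list
$stmt = $db->prepare(
    'SELECT id, url, last_item_guid, last_updated
       FROM feeds
      WHERE last_updated < (NOW() - INTERVAL 15 MINUTE)
        AND (claimed_at IS NULL OR claimed_at < (NOW() - INTERVAL 5 MINUTE))
      LIMIT 20'
);
$stmt->execute();
$staleFeeds = $stmt->fetchAll(PDO::FETCH_ASSOC);
// then mark them as claimed, so the next page view gets a different batch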

Moreover, we can add more checks to our JavaScript parser than just blindly parsing those feeds. We can download only a chunk of the RSS or Atom feed (using a proxy script developed with curl’s range options), read just up to the latest update time, and extract that time using a simple regex or string functions, instead of downloading the full feed data. If we can learn that a feed has nothing new just by downloading 1-2 kilobytes of data – instead of downloading the full feed and parsing the XML – it saves us even more computing resources for performing other jobs.
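
A tiny proxy along these lines would do it, using curl’s range option to fetch only the head of the feed. A sketch only – a real one must whitelist URLs so it cannot be abused as an open proxy:

<?php
// proxy.php – fetch the first 2 KB of a feed and extract its update time
$url = $_GET['url'];
$ch  = curl_init($url);
curl_setopt($ch, CURLOPT_RANGE, '0-2047');      // ask for the first 2 KB only
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$chunk = curl_exec($ch);
curl_close($ch);

// <lastBuildDate>/<pubDate> for RSS, <updated> for Atom
if (preg_match('#<(lastBuildDate|pubDate|updated)>([^<]+)</\1>#', $chunk, $m)) {
    echo $m[2]; // the client compares this against the last known value
}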

But of course, you cannot rely completely on your clients’ machines to parse all your data. You must have several cron’d scripts to parse the leftovers and other feeds on the server side. What I am saying is that, with a little help from JavaScript, you can make use of tremendous distributed computing power in your application, all at almost no cost.

I will come back with example code once I am done developing my open source clone of FriendFeed, and then, I am sure, you will find it was worth writing a blog post about.

[Image: Distributed Computing – image courtesy: http://www.naccq.ac.nz/bacit/0203/2004Caukill_OffPeakGrid.htm]

Have a nice RnDing time. 🙂

Building services like FriendFeed using PHP – Part 2

Following the first installment in this series, here is the second part. In this part I will focus mainly on the bookmarking and news services supported by FriendFeed. Here we go.

Supported bookmarking services by FriendFeed
1. Del.icio.us
2. Furl
3. Google Shared Stuff
4. Ma.gnolia
5. StumbleUpon

Except for Google Shared Stuff, all of the rest require just a username to generate the access point for retrieving the user’s bookmarked items. For Google Shared Stuff, you need the fully functional URL of the feed available from your Google bookmark service. (A small code sketch for generating these access points follows the list below.)

Access points

Del.icio.us

AP: http://feeds.delicious.com/rss/<user name>
example: http://feeds.delicious.com/rss/anduh


Furl

AP: http://rss.furl.net/member/<user name>.rss
example: http://rss.furl.net/member/pigge.rss


Google Shared Stuff
You can find your Google Shared Stuff URL at http://www.google.com/s2/sharing/stuff

example: http://www.google.com/s2/sharing/stuff?user=110703083499686157981&output=rss


Ma.gnolia

AP: http://ma.gnolia.com/rss/lite/people/<user name>
example: http://ma.gnolia.com/rss/lite/people/gerryquach


StumbleUpon
Whoops, double whoops, triple whoops. It took me quite some time to find the feed URL. I don’t know why it is kept so “SECRET” – LOL.

AP: http://www.stumbleupon.com/syndicate.php?stumbler=<user name>
example: http://www.stumbleupon.com/syndicate.php?stumbler=jd001
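
Since all of these (except Google Shared Stuff, as noted above) need only a username, generating the access points can be table-driven. A small sketch using the URL templates listed above – the array keys and function name are my own, not anything FriendFeed publishes:

<?php
// %s is replaced by the (url-encoded) username
$bookmarkAps = array(
    'delicious'   => 'http://feeds.delicious.com/rss/%s',
    'furl'        => 'http://rss.furl.net/member/%s.rss',
    'magnolia'    => 'http://ma.gnolia.com/rss/lite/people/%s',
    'stumbleupon' => 'http://www.stumbleupon.com/syndicate.php?stumbler=%s',
    // Google Shared Stuff is the exception – store the full feed URL instead
);

function bookmarkFeedUrl(array $aps, $service, $username)
{
    if (!isset($aps[$service])) {
        return null; // unknown service
    }
    return sprintf($aps[$service], urlencode($username));
}

// bookmarkFeedUrl($bookmarkAps, 'delicious', 'anduh')
//   => http://feeds.delicious.com/rss/anduh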


Supported news services by FriendFeed
1. Digg
2. Google Reader
3. Mixx
4. Reddit

Here are the access points:


Digg

AP: http://digg.com/users/<user name>/history.rss
example: http://digg.com/users/msaleem/history.rss


Google Reader

You can find your shared items’ feed URL here: http://www.google.com/reader/view/user/-/state/com.google/broadcast


Mixx

Unfortunately, at the time of writing this article, Mixx was napping – here is the screenshot. Once they are awake, I will update this section 🙂


Reddit

AP: http://www.reddit.com/user/<user name>/.rss
example: http://www.reddit.com/user/jack_alexander/.rss

In the next installment I will focus on scaling such a huge load successfully. I hope that will be interesting to many of you. The installment after that will focus again on the access points.