Tag: Javascript

How to enqueue JavaScript files properly in the WordPress admin panel

You've probably answered the question in your head already, haven't you? We all know the usual answer: write a function that hooks into "admin_enqueue_scripts" and, inside that function, load the script via the wp_enqueue_script() function. That's correct, but when it comes to best practices, it falls short. Loading a JavaScript file blindly on every page gets messy, and the script may conflict with other scripts already loaded on a page. Besides that, why load an extra script everywhere when you actually need it on one specific page, or two perhaps?

The correct, or "precise", way to enqueue a JavaScript file in the WordPress admin panel is to first check which page you're on, and load the script only on the specific page where it is actually required. So how do you check which page you're on in the admin panel? The enqueue hook passes a parameter that tells you exactly that. Let's have a look at the following example:

[sourcecode language="php"]
add_action("admin_enqueue_scripts", "admin_scripts_loader");

function admin_scripts_loader() {
    wp_enqueue_script("myscript", "path/to/my/script.js");
}
[/sourcecode]

There is nothing technically wrong with the enqueueing above; the only problem is that it will load your script on every admin page, whether you need it or not. So let's write a better version of that admin_scripts_loader() function.

[sourcecode language="php"]
function admin_scripts_loader($hook) {
    if (in_array($hook, array("post-new.php", "post.php", "edit.php"))) {
        // specifically load this javascript in the post editor and post listing pages
        wp_enqueue_script("myscript", "path/to/my/script.js");
    }
}
[/sourcecode]

Now the code loads your JavaScript file only on the post editor and post listing pages. The $hook parameter gives you the name of the admin PHP file you're currently on, so it's quite easy to figure out where your script should be loaded.

Hope you enjoyed it.

Cropping any part of any website – that's fun!

After seeing the excellent jCrop this evening, I was thinking about how to use it for cropping any part of any website. Using a little bit of CSS and an iframe, you can simulate cropping any webpage, and that's what I did this evening.

Check out http://sandbox.ofhas.in/pagecrop/ – type any URL in the "Url" box, load it, select any part of it (that part is done using jCrop), and then click "crop selection" – tada!

You can use this technique to embed any particular part of any website in your own website. It is done entirely using JavaScript (jQuery and jCrop) – no PHP at all 🙂
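
For the curious, here is a rough sketch of the idea. This is not the actual pagecrop code – the element ids, the transparent overlay trick, and the fixed iframe size are all assumptions – it just shows how a jCrop selection plus a clipped iframe can simulate cropping a page:

[sourcecode language="javascript"]
// The target page is loaded into a preview iframe, with a transparent <img id="overlay">
// of the same size sitting on top of it so Jcrop has something to attach to.
jQuery(function ($) {
    $("#overlay").Jcrop({
        onSelect: showCrop // the callback receives x, y, x2, y2, w, h of the selection
    });
});

// Reproduce the selected region: a div with overflow:hidden acts as the "viewport",
// and the same page is loaded in another iframe shifted by -x / -y inside it.
function showCrop(c) {
    var url = jQuery("#url").val(); // the address typed into the "Url" box

    jQuery("#result")
        .css({
            width: c.w + "px",
            height: c.h + "px",
            overflow: "hidden",
            position: "relative"
        })
        .html(
            '<iframe src="' + url + '" scrolling="no" frameborder="0" ' +
            'style="position:absolute; left:' + (-c.x) + 'px; top:' + (-c.y) + 'px; ' +
            'width:1024px; height:2000px;"></iframe>'
        );
}
[/sourcecode]

The fixed width and height of the inner iframe just need to match the preview iframe so the jCrop coordinates line up with the page.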

Check it out – you will definitely enjoy it 🙂

Look Ma, everyone's computing out there for me!

SETI@Home is probably the greatest example of low-cost distributed computing that became a big hit. After its tremendous success, many others started following the same strategy and used the power of distributed computing for other purposes, like cancer research. In this article I will show you how we can use the same power at almost zero cost, especially for your web applications.

As I am currently working on building an open source version of FriendFeed (not targeted to be an alternative, because the people at FriendFeed have done their job really well) and on scaling such a huge load effectively at low cost, I will mainly talk about FriendFeed throughout this blog post and use it as an example for my proposal.

If you consider FriendFeed as a repository of feed URLs, a lot of people, and how they are related to each other, you can imagine how big it is, or could be, in the near future. Scaling such a service would cost many developers a number of sleepless nights. So let's focus on where the problem actually is and how we can introduce distributed computing.

Besides optimizing the database to serve huge sets of data, one of the main problems of such a service is that it has to parse millions of feeds at a regular interval. If you want to bear all that load on your own server, fine, if you can afford it. But what about a low-cost solution? Let's consider a simple scenario: if your application has one million users and each of them browses your application for 10 minutes a day, you have 10 million minutes of computation a day just going to waste out there, in the lanes and by-lanes of the internet – heh heh. So let's make use of that incredible CPU power. All you have to do is let the visitors' machines do some calculations for you and free your server from the gigantic load.

When the users of your application and the relations among them are stored in your database, you can easily find out the first-degree and second-degree friends of a specific user. If you don't know what that means, it's simple: if A is a friend of B and C is a friend of A, then A is B's first-degree friend and C is B's second-degree friend. For a huge social network, the relationships may look like the following when you visualize them:


image courtesy: http://prblog.typepad.com
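
Just to make the terminology concrete, here is a tiny sketch in plain JavaScript (the in-memory friends map is a made-up stand-in for the real database query) of deriving second-degree friends from first-degree relations:

[sourcecode language="javascript"]
// Hypothetical in-memory friend list: userId -> array of first-degree friend ids.
// In the real application this data would come from the database.
var friends = {
    "B": ["A"],
    "A": ["B", "C"],
    "C": ["A"]
};

// Second-degree friends of a user: friends of friends,
// excluding the user and his/her direct friends.
function secondDegreeFriends(userId) {
    var direct = friends[userId] || [];
    var result = [];
    for (var i = 0; i < direct.length; i++) {
        var fof = friends[direct[i]] || [];
        for (var j = 0; j < fof.length; j++) {
            var candidate = fof[j];
            if (candidate !== userId &&
                direct.indexOf(candidate) === -1 &&
                result.indexOf(candidate) === -1) {
                result.push(candidate);
            }
        }
    }
    return result;
}

// secondDegreeFriends("B") -> ["C"], just like the A/B/C example above
[/sourcecode]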

Now, what we want to do is this: when B is visiting our application, we want to parse most of his/her second-degree friends' feeds on the client side, using his browser. So while generating the page for B, we will supply a bunch of feed URLs, a hash of the last known update time (or a hash of the latest item) of each of those feeds, and a JavaScript-based feed parser (for example, Google's AJAX Feed API would do fine). While B is browsing our application, we will parse those second-degree friends' feeds using JavaScript without bothering him for a single second, and post the parsed content (after checking it against the hashes, so only genuinely updated content is sent) back to a server-side script, which will then insert those results into our database – not completely blindly, of course, but after some validation or authentication. Now when A comes and visits his page (A is C's first-degree friend), he will get the latest items from C's feeds, because B has already done the parsing job for us and we have those latest results stored in our database.
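
To make the idea a bit more concrete, here is a rough sketch of what that client-side part could look like. This is not my project's actual code – the feed list, the hashOf() scheme, and the /save-feed-items.php endpoint are all made-up placeholders – it only shows the general flow using Google's AJAX Feed API and jQuery:

[sourcecode language="javascript"]
// Requires Google's loader: <script src="http://www.google.com/jsapi"></script>
// Assumed to be printed into the page while it is generated for user B:
// each entry is a second-degree friend's feed URL plus a hash of its last known item.
var feedsToParse = [
    { url: "http://example.com/friend1/rss", lastHash: "a1b2c3" },
    { url: "http://example.org/friend2/atom", lastHash: "d4e5f6" }
];

// Stand-in for whatever hashing scheme the server really uses.
function hashOf(entry) {
    return entry.link + "|" + entry.publishedDate;
}

google.load("feeds", "1");

google.setOnLoadCallback(function () {
    for (var i = 0; i < feedsToParse.length; i++) {
        parseFeed(feedsToParse[i]);
    }
});

function parseFeed(item) {
    var feed = new google.feeds.Feed(item.url);
    feed.setNumEntries(10);
    feed.load(function (result) {
        if (result.error) return; // quietly skip broken feeds

        // Feed entries are assumed newest-first: collect everything
        // until we hit the last item the server already knows about.
        var fresh = [];
        for (var j = 0; j < result.feed.entries.length; j++) {
            var entry = result.feed.entries[j];
            if (hashOf(entry) === item.lastHash) break;
            fresh.push({
                title: entry.title,
                link: entry.link,
                published: entry.publishedDate,
                content: entry.content
            });
        }

        if (fresh.length > 0) {
            // Hypothetical endpoint that validates the data and stores it in the database.
            jQuery.post("/save-feed-items.php", {
                feed: item.url,
                items: JSON.stringify(fresh)
            });
        }
    });
}
[/sourcecode]

The parsing itself happens entirely in B's browser; the server only receives the already-extracted items and decides whether to trust and store them.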

There are definitely more challenges than are explained here, like what happens when a person is a second-degree friend of multiple users. In such cases, since the server already knows the last update time of every feed while generating a page, it can calculate on the server side which feeds it really wants a particular visitor to parse.

Moreover, we can add more checks to our JavaScript parser instead of just blindly parsing those feeds. We can download only a chunk of an RSS or Atom feed (through a proxy script developed using curl's range options), read just up to the latest update time, and extract that time using a simple regex or string functions instead of downloading the full feed data. If we can tell that a feed has not been updated by downloading only 1-2 kilobytes of data, instead of downloading the full feed and parsing the XML, it saves us even more computing resources for other jobs.
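
As a rough illustration, the client-side check could look something like this. The /feed-chunk.php proxy is a made-up endpoint that would use curl's range option (CURLOPT_RANGE) to fetch only the first couple of kilobytes of the feed:

[sourcecode language="javascript"]
// Ask a (hypothetical) server-side proxy for only the first ~2 KB of a feed.
// The proxy would set CURLOPT_RANGE (e.g. "0-2047") so the full feed is never downloaded.
function checkFeedForUpdate(feedUrl, lastKnownTime, onUpdated) {
    jQuery.get("/feed-chunk.php", { url: feedUrl, bytes: 2048 }, function (chunk) {
        // Atom feeds carry <updated>, RSS feeds carry <pubDate> / <lastBuildDate>.
        var match = chunk.match(/<(updated|pubDate|lastBuildDate)>([^<]+)</i);
        if (!match) return; // no timestamp found in the chunk, skip this feed

        // Note: real code would need more careful date parsing across feed formats.
        var feedTime = new Date(match[2]).getTime();
        if (feedTime > lastKnownTime) {
            onUpdated(feedUrl); // only now is it worth fetching and parsing the whole feed
        }
    });
}

// Usage: parseFeed() would be the full parser from the earlier sketch.
checkFeedForUpdate(
    "http://example.com/friend1/rss",
    new Date("Sun, 01 Mar 2009 00:00:00 GMT").getTime(),
    function (url) {
        parseFeed({ url: url, lastHash: "" });
    }
);
[/sourcecode]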

But of course, you cannot rely completely on your clients' machines to parse all your data. You must still have several cron'd scripts on the server side to parse leftovers and other feeds. What I am saying is that, with a little help from JavaScript, you can make use of tremendous distributed computing power in your application, all at almost no cost.

I will come back with example code once I am done developing my open source clone of FriendFeed, and then, I am sure, you will find it was worth writing a blog post about.

Distributed Computing
image courtesy: http://www.naccq.ac.nz/bacit/0203/2004Caukill_OffPeakGrid.htm

Have a nice RnDing time. 🙂