
Thursday, December 23, 2010

Implementing Cron Jobs in Drupal

To edit your crontab with crontab -e, you typically need to be logged in on your web server as the user the job should run as (often the administrator).

Cron can be set up to run a command more or less often than once an hour. To run Drupal's cron.php at the start of every hour, add the following line to your crontab file:

0 * * * * /usr/bin/wget -O - -q -t 1 http://www.yoursite.com/cron.php

The five fields separated by spaces control when cron runs the command on the right. Let's break it down with some ASCII art:

* * * * * command
| | | | |
| | | | |_day of week (0-6) where Sunday is 0
| | | |
| | | |_month (1-12)
| | |
| | |_day of month (1-31)
| |
| |_hour (0-23)
|
|_minute (0-59)

So say you wanted to run cron.php once a day. The following line would run your Drupal cron job every night at midnight:

0 0 * * * /usr/bin/wget -O - -q -t 1 http://www.yoursite.com/cron.php

The next line would run the Drupal cron job every day at 2:30 PM (14:30 in 24-hour time):

30 14 * * * /usr/bin/wget -O - -q -t 1 http://www.yoursite.com/cron.php

Let's run cron every Sunday and Wednesday at 3 AM:

0 3 * * 0,3 /usr/bin/wget -O - -q -t 1 http://www.yoursite.com/cron.php

This gives you plenty of flexibility in deciding when your Drupal cron job runs.

Monday, April 12, 2010

Drupalimage plugin in TinyMCE

To enable the drupalimage plugin in the TinyMCE WYSIWYG editor, follow these steps:

Edit the plugin_reg.php file ( \modules\tinymce\tinymce\jscripts\tiny_mce\plugins ) and add
these lines anywhere above the return statement:

$plugins['drupalimage'] = array();
$plugins['drupalimage']['theme_advanced_buttons1'] = array('drupalimage');
$plugins['drupalimage']['extended_valid_elements'] = array('img[class|src|border=0|alt|title|width|height|align|name]');

Differences between Drupal 5 and Drupal 6

Each new version of Drupal brings new APIs and new functionality, and makes some things obsolete; design and code improvements are always considered more important than backward compatibility in Drupal. Even so, Drupal is never rewritten from scratch for a new version. The changes are usually localized to a few areas of the code, which makes them possible to follow.

Some of the new features in Drupal 6 are:

* Quick and easy setup
* Drag-and-drop administration
* All languages spoken here!
* Actions and triggers
* Sign in with OpenID
* Update status module
* Optimized code
* A new menu system
* Scalability options
* Scripting from the command line etc

For more details see http://drupal.org/drupal-6.0

Differences between Joomla and Drupal

Joomla and Drupal are two well-known content management systems.

The main differences are given below:

1. Drupal supports pluggable theme engines (such as Smarty), whereas Joomla does not.

2. Joomla uses plugins and modules, whereas Drupal has modules only.

3. In Drupal, PHP code can be written directly in content (when the PHP filter module is enabled), whereas in Joomla you need to install a plugin for PHP support.

4. Flexibility and power: Drupal is significantly more powerful and much more flexible.

5. Joomla is far easier to get up and running. Even with all the free videos, blogs, and tutorials available, Drupal is still a lot more challenging.

6. Drupal's taxonomy system is excellent.

7. Drupal's tools are very, very good.

8. Use Joomla if you want to get a nice-looking site up quickly and can live with a slower system, rigid content categorization, and limited design/configuration options.

9. Use Drupal if you want high performance, scalability, good content management, and significant design flexibility. But be prepared to spend a lot of time and money to get the site looking professional.

Let us spend some time getting to know the heroes of our neighborhood...

2009 Ramon Magsaysay Awardees Announced!

This award is popularly known as the “Nobel Prize” of Asia.

The Ramon Magsaysay Award was created in 1957, the year the Philippines lost, in a plane crash, a President who was well-loved for his simplicity and humility, his passion for justice, particularly for the poor, and his advancement of human dignity.

Awardees of 2009 are:

Krisana Kraisintu, from Thailand. She is being recognized for “her placing pharmaceutical rigor at the service of patients, through her untiring and fearless dedication to producing much-needed generic drugs in Thailand and elsewhere in the developing world.”

Deep Joshi, from India. He is being recognized for “his vision and leadership in bringing professionalism to the NGO movement in India, by effectively combining ‘head’ and ‘heart’ in the transformative development of rural communities.”

Yu Xiaogang, from China. He is being recognized for “his fusing the knowledge and tools of social science with a deep sense of social justice, in assisting dam-affected communities in China to shape the development projects that impact their natural environment and their lives.”

Antonio Oposa, Jr., from the Philippines. He is being recognized for “his pathbreaking and passionate crusade to engage Filipinos in acts of enlightened citizenship that maximize the power of law to protect and nurture the environment for themselves, their children, and generations still to come.”

Ma Jun, from China. He is being recognized for “his harnessing the technology and power of information to address China’s water crisis, and mobilizing pragmatic, multisectoral, and collaborative efforts to ensure sustainable benefits for China’s environment and society.”

Ka Hsaw Wa, from Burma. He is being recognized for “his dauntlessly pursuing non violent yet effective channels of redress, exposure, and education for the defense of human rights, the environment, and democracy in Burma.”

To know more about our neighborhood heroes (the Magsaysay Awards), click this link:

http://www.rmaf.org.ph/index.php

How to Develop / Create a Simple Application for iPhone?

This is intended for all those who are looking for examples and tutorials for iPhone application development. The examples cover basic requirements, development environments, UI development, and making your first app.

If you're like me, you'll find the best way to learn new technologies or techniques is not by reading the “Official” product documentation, but by studying the works of others. I've learned most of what I know about iPhone development not by reading Apple's developer documentation, but by studying what other developers have done with their apps. To that end, I've compiled a list of the most useful iPhone development facts I've found after searching the web for hours. I'm sure you'll find yourself writing much better code after you look at some of the things explained here.

This is a quick guide to iPhone software development, i.e. a guide for competent developers who haven’t written code for the iPhone platform before, and just want to get started right now.

If you’re inexperienced in application development, this isn’t for you; try a good book instead. If, however, you’re confident of your ability to read documentation, do your research, and apply your existing skills to a new language, IDE, SDK and platform without the need for a preface, introduction and lecture on guiding principles, you’ve come to the right place.

For easy and speedy assimilation, this guide consists of simple lists of facts and tips as bullet points, split into sensible sections. Have a quick read through them, and then start coding.

Hardware

* You have to use a Mac to develop iPhone apps. Specifically, you need an Intel Mac - iPhone development isn’t supported on PowerPC Macs. There are hacks to make it work (at time of writing), but if you’re serious about it you should probably just get an Intel Mac. Doesn’t have to be brand new; all new Macs have been Intel-based for years now.
* You’ll need the iPhone SDK, which is free from Apple when you sign up as an iPhone Developer.
* It’s obviously handy if you have an iPhone or iPod touch too, though you can certainly get started without one (the SDK includes a Simulator you can run your code in).
* Xcode (your IDE) has an Eclipse-like all-in-one UI by default (though you can split it all out into separate windows if you like), but its documentation browser is a separate window. Both those primary windows work best when large. You’ll also need to at least occasionally be using Interface Builder for your GUIs, and you’ll be running the iPhone Simulator often. Given all these demands on your screen-space, you may find it helpful to have a big monitor, or even two side-by-sides. Your Mac will automatically extend your desktop onto multiple monitors as soon as you connect another one.

Development Environment

* You have to use Xcode as your IDE. You can use an external text-editor of your choice alongside it if you like.
* Xcode has built-in help and API listings. Look in the Help menu, and choose Documentation.
* It also has a Research Assistant feature which shows info about whatever class-name or type your text insertion-point is in or near (including links to Apple sample code which uses that method/class/etc). That’s in the Help menu too.
* Option/Alt-doubleclick on a class or method to go to the relevant documentation.
* Apple/Command-doubleclick on a class or method to go to the place where it’s defined.
* Press the Esc key to autocomplete. You’ll get a popup list of options if necessary.
* There’s a graphical debugger (a UI on gdb), and a tool called Instruments which lets you inspect just about everything from file access to memory usage to performance.
* Xcode provides native GUI support for Subversion, CVS and Perforce. You’re also free to use any SCM system you like, via the command line. Xcode won’t cause any damage to your SCM system’s special/hidden files.

Programming language

* You have to use the Objective-C language.
* It’s just C, with some extra stuff (literally; it’s C with a fancy preprocessor and runtime).
* Objective-C syntax looks weird, but it’s only visually different. receiver.doThing(foo, bar, baz) in Java would be something like [receiver doThingWithFoo:foo andBar:bar andBaz:baz] in ObjC.
* Objective-C just splits the method-name into pieces and intersperses those pieces with the actual arguments, so it reads like a sentence. It’s verbose, but it’s more explanatory. Accept it and move on; you have auto-complete to help with the typing.
* Because it’s based on C, object types aren’t automatic pointers like they are in Java; you have to explicitly create them as pointers. This is usually just as simple as putting an asterisk between the class name and the variable name when you declare an object.
* As you might expect, that breezy glossing-over hides some complexity, and it will cause you some pain unless you learn how pointers actually work. There's plenty of tutorial material online about pointer syntax in C. Try reading it sometime.
* Primitives like int etc behave as you’d expect, because it’s just C.
* If you want to use dot-notation to access instance variables, you can. You just need to declare a “property” for that instance variable first. Read the docs on properties.
* Accessing properties is actually just calling appropriate automatically-generated getter and setter methods on the object (literally). You can override those methods if you want to; it’ll work as you expect.
* A few specific things you’ll want to know about:
o You can call the superclass’s implementation via super.
o The equivalent to Java’s this is pretty much self.
o You can access the selector (method name) for the current method with _cmd.
o Those last two are implicit parameters to every method; it’s not magic.
* Strings. About 99.99999% of the time you'll want NSString objects instead of raw C strings. Objective-C has a nice feature whereby you just put an @ symbol before a string to make it an NSString, like @"this". Just do that by default, every time; it's pretty much always what the APIs will expect. A short sketch pulling these points together follows this list.
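
To tie these points together, here is a tiny sketch; the label and greeting are invented for illustration, not taken from any sample project:

// Build an NSString from a literal and a format method
NSString *greeting = [NSString stringWithFormat:@"Hello, %@!", @"World"];

// Objects are declared as pointers; the asterisk sits between type and name
UILabel *label = [[UILabel alloc] initWithFrame:CGRectZero];

// Dot-notation on a declared property calls the generated setText: setter
label.text = greeting;

// We created the label with alloc, so we release it (see Memory management)
[label release];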

Application Frameworks

* There are two main frameworks you’ll be using: UIKit and Foundation.
* These aren’t “Cocoa” (that’s the Mac desktop development equivalent, consisting of AppKit and Foundation), but they’re extremely closely related and in many cases are near-duplicates. If you’re looking up documentation online, be sure you’re looking at docs for the iPhone version of Foundation, and not the Cocoa (Mac) version.
* Classes to do with actual visual controls etc (including windows, views, buttons, tables, etc) begin with “UI”, and have names like “UIButton” or “UIWindow”.
* A lot of the non-visual stuff is in classes beginning with “NS”, including object types like NSString, NSNumber, NSArray, NSDictionary (a hashmap class) and so forth. There are also some other frameworks, including a fair few C-based ones.
* There’s a convention whereby data-structures are immutable (not editable) by default. In almost every case, if you want an editable version of that same type then it has a mutable subclass, whose name will be of the form “NSMutableArray”, “NSMutableDictionary” and so forth. You can often call mutableCopy on the immutable version to get a mutable one with the same contents. Pay attention to memory-management rules (mentioned below) when you do that.
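
A quick illustration of the immutable/mutable convention just described; the color values are arbitrary:

// An immutable array; the trailing nil marks the end of the argument list
NSArray *colors = [NSArray arrayWithObjects:@"red", @"green", @"blue", nil];

// mutableCopy returns an editable copy that you own and must release
NSMutableArray *editableColors = [colors mutableCopy];
[editableColors addObject:@"purple"];

// ... use editableColors ...
[editableColors release];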

Saving and Loading

* For saving preferences or settings, look at NSUserDefaults.
* For saving files, look at NSDictionary’s ability to read and write XML property-lists (for saving basic data-structures containing standard object types), or at NSData (for more complex and/or custom objects).
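
For example, saving and re-reading a preference with NSUserDefaults might look like this sketch (the key names are made up):

// Write a couple of preferences
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setBool:YES forKey:@"SoundEnabled"];
[defaults setObject:@"English" forKey:@"PreferredLanguage"];
[defaults synchronize];   // push the changes to disk now

// Read them back later, for example at the next launch
BOOL soundOn = [defaults boolForKey:@"SoundEnabled"];
NSString *language = [defaults stringForKey:@"PreferredLanguage"];
NSLog(@"Sound enabled: %d, language: %@", soundOn, language);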

Memory management

* You have to manage your memory manually. There’s no garbage collector.
* There are simple rules and conventions; follow them at all times, and without exception.
* This is one area where you just have to read the docs. There’s also a good tutorial here.
* Don’t make it harder than it actually is. You can get the hang of this in an afternoon.

GUI programming

* There’s a really nice GUI builder called Interface Builder, included with Xcode. When people mention “IB”, it’s Interface Builder they mean.
* You can also make your GUI programmatically if you want, or mix both techniques. Neither method is “preferred”; IB is a tool that you can use if it helps you.
* The files which Interface Builder generates are called “nibs”. The actual file-extension for nibs is often “xib”, but can be “nib” too. “xib” is used for iPhone stuff; it’s a newer format than nib. Interface Builder will just do the right thing.
* Code can be connected to IB-created GUI via “outlets”: instance-variables with IBOutlet before their type, on classes which you connect in IB. Read the docs on it.
* IB-created GUI can be connected to code via “actions”: methods which take one parameter of type id, and return the type IBAction, on classes which you connect in IB. Read the docs on it, and see the sketch of an outlet and an action together after this list.
* To make the connections you need to have an instance of the appropriate class in the nib document. If it doesn’t exist, drag a yellow Object cube from the Library, then use the Inspector to change its class to the correct one. That’s enough to create an actual instance of the class at runtime.
* You make connections by right-click-dragging in Interface Builder. You can also just right-click to see and modify a list of all connections for an object.
* UI in a nib file isn’t like a specification or template, and it isn’t code-generation either; it’s actual instances of objects, serialized into a file for later reloading. These are real, actual objects.
* There’s zero magic in Interface Builder. The first/main nib is loaded by the UIApplication class, and its name is read from a property-list file in your project; you can see it and take a look. That’s how the main nib gets loaded; it’s just a handy default behaviour. You can load as many other nibs as you like.
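
As promised above, here is roughly what a header wired up in IB might look like; the class, outlet, and action names are hypothetical:

#import <UIKit/UIKit.h>

@interface SettingsViewController : UIViewController
{
    // Connected to a UILabel in the nib by right-click-dragging in IB
    IBOutlet UILabel *statusLabel;
}

// Connected to a button's "Touch Up Inside" event in IB
- (IBAction)refreshPressed:(id)sender;

@end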

Debugging

* Xcode has a GUI for gdb, the well-known C (etc) debugger.
* For cowboy debugging, use the NSLog() function; it's a bit like System.out.println in Java. It takes an NSString, so it would look like this: NSLog(@"Hello World!");
* You can put printf-style formatting characters into it, and add the relevant variables after the string just like you'd expect. The syntax is identical to printf: NSLog(@"You are %d years old", age);
* You can also output arbitrary object types using the %@ formatting character. Many hierarchical datastructures like NSArray and NSDictionary will be pretty-printed automatically.
* The equivalent of Java's String toString() method is -(NSString *)description. Many of UIKit and Foundation's classes provide helpful default implementations of this. The root class's implementation will simply print the object's class-name and address in memory. A short sketch follows this list.
* The console output is within Xcode itself, either in a separate window or in a split pane (depending on how you’ve got Xcode set up).
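
Here is a small, hypothetical example of the %@ / description pairing; the Person class and its name and age instance variables are invented for illustration:

// In the hypothetical Person class
- (NSString *)description
{
    return [NSString stringWithFormat:@"<Person %@, %d years old>", name, age];
}

// Elsewhere: %@ asks the object for its description and logs the result
NSLog(@"Current user: %@", aPerson);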

Using the iPhone Simulator

* The Simulator is included with the iPhone SDK.
* It is NOT cycle-accurate. Not even remotely. Your animations will be much faster in the Simulator than on the device, so don’t ever judge performance from the Simulator.
* It doesn’t have all the apps/facilities of an actual iPhone. For example, it doesn’t simulate things like the accelerometer.
* Don’t try to submit apps to the App Store that you haven’t tested on actual devices.
* Your Mac’s keyboard works for text-entry in the simulator. Your mouse works for clicking and dragging. You can simulate two-finger input by holding down the Option/Alt key.
* The Simulator has menu-commands to rotate the device to landscape and back to portrait, and also to simulate a memory-warning so you can respond to that situation (no garbage collector, remember?)

Deploying to a device

* You have to have a certificate in order to sign your code to make it work on a device.
* This is true for development as well as for actually deploying on the App Store.
* You have to pay a yearly fee ($99) to get the ability to create such a certificate.

Getting onto the App Store

* If you haven’t yet signed up as a paid iPhone Developer to get a certificate, start now.
* You can’t just do it at the last minute, once your app is ready - it takes time.
* You'll have to provide various pieces of information, and enter into a distribution contract with Apple. Apple takes a 30% cut of your sales revenue and meets all distribution costs.
* Give yourself at least several days to do all this.
* Apple has to approve your app before it goes onto the App Store. This can take any amount of time; right now it’s somewhere around 1-2 weeks. You can’t speed this process up; you just need to wait. This is an inherently and unavoidably flexible part of your deadline, so don’t over-promise.
* This approval process happens even when you submit an update to an existing app, every time.
* Your app will be run by an actual human.
* If your app is rejected, you’ll almost always be told why. You can make suitable changes and re-submit it.

Getting more help

When you need help, do what you’d do on your own platform:

* Read the official docs. There are Getting Started guides as well as raw API documentation; it’s all there when you log into the iPhone Developer site.
* Look at the sample code provided by Apple.
* Search the official forums, and post a question if necessary.
* Also try third-party forums.
* Use Google; someone might have blogged about this.
* Buy a book; there are a few out there now. Use Amazon reviews to see what’s worth buying.

And apply the usual common-sense principles when seeking help:

* Don’t ask your questions here; this is a blog, not a support forum. Likewise, don’t email me your questions.
* Do your research before asking anything. Be prepared to answer the question “what have you tried?”
* Be precise. If you’re coming from another platform, say so and mention which one - you might be using terminology from your platform which is slightly different in the iPhone development world. If you tell us where you’re from, we’ll figure it out.
* Be concise (but not at the expense of precision).
* Be polite and respectful; it costs nothing, and you’ll get more and better answers.

Creating First iPhone Application - MyApp

The process for creating an iPhone app is similar to that for creating a Mac OS X application. Both use the same tools and many of the same basic libraries. Despite the similarities, there are also significant differences. An iPhone is a mobile platform, not a desktop PC; it has a different purpose and requires a very different design approach. That approach needs to take advantage of the strengths of iPhone OS and forego features that might be irrelevant or impractical in a mobile environment. The smaller size of the iPhone and iPod touch screens also means that your application's user interface should be well organized and always focused on the information the user needs most.

iPhone OS lets users interact with iPhone and iPod touch devices in ways that desktop applications cannot match. The Multi-Touch interface is a revolutionary new way to receive events, reporting on each separate finger that touches the screen and making it possible to handle multi-finger gestures and other complex input easily. In addition, built-in hardware features such as the accelerometers, although present in some desktop systems, are used more extensively in iPhone OS to track the screen's current orientation and adjust your content accordingly. Understanding how you can use these features in your applications will help you focus on a design that is right for your users.

The best way to understand the design of an iPhone application is to look at an example. This article takes you on a tour of the MyApp sample application. This sample demonstrates many of the typical behaviors of an iPhone application, including:

* Initializing the application
* Displaying a window
* Drawing custom content
* Handling touch events
* Performing animations

“The MyApp application window” shows the interface for this application. Touching the Welcome button triggers an animation that causes the button to pulse and center itself under your finger. As you drag your finger around the screen, the button follows your finger. Lift your finger from the screen and, using another animation, the button snaps back to its original location. Double-tapping anywhere outside the button changes the language of the button’s greeting.

Figure 1 The MyApp application window


If you are not familiar with the Objective-C programming language, you should first read the Objective-C Primer to familiarize yourself with its basic syntax.

Examining the MyApp Sample Project

Downloading the MyApp sample provides you with the source code and support files needed to build and run the application. You manage projects for iPhone OS using the Xcode application (located in /Developer/Applications by default). Each Xcode project window combines a workspace for gathering your code and resource files, build rules for compiling your source and assembling your application, and tools for editing and debugging your code.

“The MyApp project window” shows the Xcode project window for the MyApp application. To open this project, copy it to your local hard drive and double-click the MyApp.xcodeproj file to open it. (You can also open the project from within Xcode by selecting File > Open and choosing the file.) The project includes several Objective-C source files (denoted by the .m extension), some image files and other resources, and a predefined target (MyApp) for building the application bundle.

Figure 2 The MyApp project window


In iPhone OS, the ultimate target of your Xcode project is an application bundle, which is a special type of directory that houses your application’s binary executable and supporting resource files. Bundles in iPhone OS have a relatively flat directory structure, with most files residing at the top level of the bundle directory. However, a bundle may also contain subdirectories to store localized versions of strings and other language-specific resource files.

Building the MyApp Application

To build the MyApp application and run it in the simulator, do the following:

1. Open the MyApp.xcodeproj file in Xcode.
2. In the project toolbar, make sure the simulator option is selected in the Active SDK menu. (If the Active SDK menu does not appear in the toolbar, choose Project > Set Active SDK > Simulator.)
3. Select Build > Build and Go (Run) from the menu, or simply click the Build and Go button in the toolbar.

When the application finishes building, Xcode loads it into the iPhone simulator and launches it. Using your mouse, you can click the Welcome button and drag it around the screen to see the application’s behavior. If you have a device configured for development, you can also build your application and run it on that device.

A Word About Memory Management

iPhone OS is primarily an object-oriented system, so most of the memory you allocate is in the form of Objective-C objects. Objects in iPhone OS use a reference-counting scheme to know when it is safe to free up the memory occupied by the object. When you first create an object, it starts off with a reference count of 1. Clients receiving that object can opt to retain it, thereby incrementing its reference count by 1. If a client retains an object, the client must also release that object when it is no longer needed. Releasing an object decrements its reference count by 1. When an object’s reference count equals 0, the system automatically reclaims the memory for the object.

Note: iPhone OS does not support memory management using the garbage collection feature that is in Mac OS X v10.5 and later.

If you want to allocate generic blocks of memory (that is, memory not associated with an object), you can do so using the standard malloc library of calls. As is the case with any memory you allocate using malloc, you are responsible for releasing that memory when you are done with it by calling the free function. The system does not release malloc-based blocks for you.
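
Here is a brief sketch of both styles of allocation described above; the array and the 256-byte buffer are arbitrary examples:

// Objective-C objects: reference counting by hand
NSMutableArray *names = [[NSMutableArray alloc] init];   // count is 1; you own it
[names retain];                                          // count is 2
[names release];                                         // count is 1
[names release];                                         // count is 0; memory is reclaimed

// Plain C memory (malloc and free come from <stdlib.h>); pair every malloc with a free
char *buffer = (char *)malloc(256);
if (buffer != NULL) {
    // ... use the buffer ...
    free(buffer);
}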

Regardless of how you allocate memory, managing your overall memory usage is more important in iPhone OS than it is in Mac OS X. Although iPhone OS has a virtual memory system, it does not use a swap file. This means that code pages can be flushed as needed but your application’s data must all fit into memory at the same time. The system monitors the overall amount of free memory and does what it can to give your application the memory it needs. If memory usage becomes too critical though, the system may terminate your application. However, this option is used only as a last resort, to ensure that the system has enough memory to perform critical operations such as receiving phone calls.

Initializing the MyApp Application

As is true for every C-based application, the initial entry point for every iPhone application is a function called main. The good news is that, when you create a new project using the iPhone templates in Xcode, you do not have to write this function yourself. The project templates include a version of this function with all the code needed to start your application.

“Using the provided main function” shows the main function for the MyApp application. The main function is located in that project’s main.m file. Every application you create will have a main function that is almost identical to this one. This function performs two key tasks. First, it creates the application’s top-level autorelease pool, whose job is to reclaim the memory for Objective-C objects that are freed using their autorelease method. Second, it calls the UIApplicationMain function to create the MyApp application’s key objects, initialize those objects, and start the event-processing loop. The application does not return from this function until it quits.

Listing 1 Using the provided main function
int main(int argc, char *argv[])
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
int retVal = UIApplicationMain(argc, argv, nil, nil);
[pool release];
return retVal;
}

Defining the Application Delegate

One of the most important architectural details of your project is defining the application delegate object, which is instantiated from a class you provide in your project. The application delegate class in MyApp project declares its interface in MyAppAppDelegate.h and defines its implementation in MyAppAppDelegate.m. Once you have added these files to the project, you can use Interface Builder to designate the class as the application delegate. Interface Builder is a visual tool that you use to create and arrange views in a window, set up view hierarchies, configure each view’s options, and establish relationships between the views and the other objects of your application. Because it is a visual tool, you perform all of these tasks by dragging components around a window surface. The result is an interactive version of your interface that you can see immediately and change in seconds. Interface Builder saves your user interface in a file known as a nib file, which is an archive of your application’s object graph.
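
The article does not reproduce MyAppAppDelegate.h, but judging from the listings that follow, its interface presumably looks something like this sketch (the exact property attributes are assumptions):

#import <UIKit/UIKit.h>

@interface MyAppAppDelegate : NSObject <UIApplicationDelegate>
{
    UIWindow *window;
    UIViewController *viewController;
}

@property (nonatomic, retain) IBOutlet UIWindow *window;
@property (nonatomic, retain) UIViewController *viewController;

@end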

To launch Interface Builder and see how the application delegate object’s role is defined, double-click the MainWindow.xib file (under MyApp > Resources) in the Groups & Files pane of the Xcode project window. MainWindow.xib is the nib file that contains your application’s window and defines the relationships among several important objects in your application, including the application delegate. To see how the application delegate relationship is established, click the File’s Owner icon in the nib file document window (titled “MainWindow.xib”), show the Inspector window (choose Tools > Inspector), and click the Inspector window’s Application Connections tab. As shown in “The application delegate”, the Inspector shows that the File’s Owner object (which represents the application in the nib file) has a delegate outlet connected to the MyAppAppDelegate object.

Figure 3 The application delegate


The application delegate object works in tandem with the standard UIApplication object to respond to changing conditions in the application. The application object does most of the heavy lifting, but the delegate is responsible for several key behaviors, including the following:

* Setting up the application’s window and initial user interface
* Performing any additional initialization tasks needed for your custom data engine
* Opening content associated with the application’s custom URL schemes
* Responding to changes in the orientation of the device
* Handling low-memory warnings
* Handling system requests to quit the application

At launch time, the most immediate concern for the delegate object is to set up and present the application window to the user, which is described in “Creating the Application Window”. The delegate should also perform any tasks needed to prepare your application for immediate use, such as restoring the application to a previous state or creating any required objects. When the application quits, the delegate needs to perform an orderly shutdown of the application and save any state information needed for the next launch cycle.

Creating the Application Window

Every application is responsible for creating a window that spans the entire screen and for filling that window with content. Graphical applications running in iPhone OS do not run side-by-side with other applications. In fact, other than the kernel and a few low-level system daemons, your application is the only thing running after it is launched. What's more, your application should never need more than one window, an instance of the UIWindow class. In situations where you need to change your user interface, you change the views displayed by your window.

Windows provide the drawing surface for your user interface, but view objects provide the actual content. A view object is an instance of the UIView class that draws some content and responds to interactions with that content. iPhone OS defines standard views to represent things such as tables, buttons, text fields, and other types of interactive controls. You can add any of these views to your window, or you can define custom views by subclassing UIView and implementing some custom drawing and event-handling code. The MyApp application defines two such views, represented by the MyAppView and PlacardView classes, to display the application's interface and handle user interactions.

At launch time, the goal is to create the application window and display some initial content as quickly as possible. The window is unarchived from the MainWindow.xib nib file. When the application reaches a state where it is launched and ready to start processing events, the UIApplication object sends the delegate an applicationDidFinishLaunching: message. This message is the delegate’s cue to put content in its window and perform any other initialization the application might require.

In the MyApp application, the delegate’s applicationDidFinishLaunching: method does the following:

1. It creates a view controller object whose job is to manage the content view of the window.
2. It initializes the view controller with an instance of the MyAppView class, which is stored in the MyAppView.xib nib file, to act as the background view and fill the entire window frame.
3. It adds the controller’s view as a subview of the window.
4. It shows the window.

“Creating the content view” shows the applicationDidFinishLaunching: method for the MyApp application, which is defined in the application delegate’s implementation file, MyAppAppDelegate.m. This method creates the main content view for the window and makes the window visible. Showing the window lets the system know that your application is ready to begin handling events.

Listing 2 Creating the content view
- (void)applicationDidFinishLaunching:(UIApplication *)application
{
// Set up the view controller
UIViewController *aViewController = [[UIViewController alloc]
initWithNibName:@"MyAppView" bundle:[NSBundle mainBundle]];
self.viewController = aViewController;
[aViewController release];
// Add the view controller’s view as a subview of the window
UIView *controllersView = [viewController view];
[window addSubview:controllersView];
[window makeKeyAndVisible];
}

Note: You can use the applicationDidFinishLaunching: method to perform other tasks besides setting up your application user interface. Many applications use it to initialize required data structures, read any user preferences, or return the application to the state it was in when it last quit.

Although the preceding code creates the window’s background view and then shows the window, what you do not see in the preceding code is the creation of the PlacardView class that displays the Welcome button. That behavior is handled by the setUpPlacardView method of the MyAppView class, which is called from the initWithCoder: method called when the MyAppView object is unarchived from its nib file. The setUpPlacardView method is shown in “Creating the placard view”. Part of the initialization of this view includes the creation of a PlacardView object. Because the MyAppView class provides the background for the entire application, it adds the PlacardView object as a subview. The relationship between the two views not only causes the Welcome button to be displayed on top of the application’s background, it also allows the MyAppView class to handle events that are targeted at the button.

Listing 3 Creating the placard view
- (void)setUpPlacardView
{
// Create the placard view — it calculates its own frame based on its image
PlacardView *aPlacardView = [[PlacardView alloc] init];
self.placardView = aPlacardView;
[aPlacardView release];
placardView.center = self.center;
[self addSubview:placardView];
}

Drawing the Welcome Button

The standard views provided by UIKit can be used without modification to draw many types of simple content. For example, you can use the UIImageView class to display images and the UILabel class to display text strings. The MyAppView class in the MyApp application also takes advantage of a basic property of all UIView objects (specifically, the backgroundColor property) to fill the view with a solid color. This property can be set in code in the view object's initialization method. In this case, the property is set when MyAppView is created in the MyAppView.xib nib file, using a color well in the Attributes tab of the Inspector window of Interface Builder. When you need to draw content dynamically, however, you must use the more advanced drawing features found in UIKit or you must use Quartz or OpenGL ES.

The PlacardView class in the MyApp application draws the Welcome button and manages its location on the screen. Although the PlacardView class could draw its content using an embedded UIImageView and UILabel object, it instead draws the content explicitly, to demonstrate the overall process. As a result, this class implements a drawRect: method, which is where all custom drawing for a view takes place.

By the time a view’s drawRect: method is called, the drawing environment is configured and ready to go. All you have to do is specify the drawing commands to draw any custom content. In the PlacardView class, the content consists of a background image (stored in the Placard.png resource file) and a custom string, the text for which can change dynamically. To draw this content, the class takes the following steps:

1. Draw the background image at the view’s current origin. (Because the view is already sized to fit the image, this step provides the entire button background.)
2. Compute the position of the welcome string so that it is centered in the button. (Because the string size can change, the position needs to be computed each time based on the current string size.)
3. Set the drawing color to black.
4. Draw the string in black, and slightly offset.
5. Set the drawing color to white.
6. Draw the string again in white at its intended location.

“Drawing the Welcome button” shows the drawRect: method for the PlacardView class. The placardImage member variable contains a UIImage object with the background for the button and the currentDisplayString member variable is an NSString object containing the welcome string. After drawing the image, this method calculates the position of the string within the view. The size of the string is already known, having been calculated when the string was loaded and stored in the textSize member variable. The string is then drawn twice (once in black and once in white) using the drawAtPoint:forWidth:withFont:fontSize:lineBreakMode:baselineAdjustment: method of NSString.

Listing 4 Drawing the Welcome button
- (void)drawRect:(CGRect)rect
{
// Draw the placard at 0, 0
[placardImage drawAtPoint:(CGPointMake(0.0, 0.0))];
/*
Draw the current display string.
This could be done using a UILabel, but this serves to illustrate
the UIKit extensions to NSString. The text is drawn in the center of the
view twice - first slightly offset in black, then in white - to give
an embossed appearance. The size of the font and text are calculated
in setupNextDisplayString.
*/
// Find point at which to draw the string so it will be in the center of the view
CGFloat x = self.bounds.size.width/2 - textSize.width/2;
CGFloat y = self.bounds.size.height/2 - textSize.height/2;
CGPoint point;
// Get the font of the appropriate size
UIFont *font = [UIFont systemFontOfSize:fontSize];
[[UIColor blackColor] set];
point = CGPointMake(x, y + 0.5);
[currentDisplayString drawAtPoint:point
forWidth:(self.bounds.size.width-STRING_INDENT)
withFont:font
fontSize:fontSize
lineBreakMode:UILineBreakModeMiddleTruncation
baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
[[UIColor whiteColor] set];
point = CGPointMake(x, y);
[currentDisplayString drawAtPoint:point
forWidth:(self.bounds.size.width-STRING_INDENT)
withFont:font
fontSize:fontSize
lineBreakMode:UILineBreakModeMiddleTruncation
baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
}

When you need to draw content that is more complex than images and strings, you can use Quartz or OpenGL ES. Quartz works with UIKit to handle the drawing of vector-based paths, images, gradients, PDF, and other complex content that you want to create dynamically. Because Quartz and UIKit are based on the same drawing environment, you can call Quartz functions directly from the drawRect: method of your view and even mix and match Quartz calls through the use of UIKit classes.

OpenGL ES is an alternative to Quartz and UIKit that lets you render 2D and 3D content using a set of functions that resemble (but are not exactly like) those found in OpenGL for Mac OS X. Unlike Quartz and UIKit, you do not use your view’s drawRect: method to do your drawing. You still use a view, but you use that view object primarily to provide the drawing surface for your OpenGL ES code. How often you update the drawing surface, and which objects you use to do so, are your decision.
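
For instance, a custom view's drawRect: method can mix UIKit and Quartz calls, as in this rough sketch (the colors and shapes are arbitrary):

- (void)drawRect:(CGRect)rect
{
    // UIKit call: fill the background with white
    [[UIColor whiteColor] set];
    UIRectFill(self.bounds);

    // Quartz calls against the same current context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0.8, 0.1, 0.1, 1.0);
    CGContextFillEllipseInRect(context, CGRectInset(self.bounds, 20.0, 20.0));
}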

Handling Touch Events

The Multi-Touch interface in iPhone OS makes it possible for your application to recognize and respond to distinct events generated by multiple fingers touching the device. The ability to respond to multiple fingers offers considerable power but represents a significant departure from the way traditional, mouse-based event-handling systems operate. As each finger touches the surface of the device, the touch sensor generates a new touch event. As each finger moves, additional touch events are generated to indicate the finger’s new position. When a finger loses contact with the device surface, the system delivers yet another touch event to indicate that fact.

Because there may be multiple fingers touching the device at one time, it is possible for you to use those events to identify complex user gestures. The system provides some help in detecting common gestures such as swipes, but you are responsible for detecting more complex gestures. When the event system generates a new touch event, it includes information about the current state of each finger that is either touching or was just removed from the surface of the device. Because each event object contains information about all active touches, you can monitor the actions of each finger with the arrival of each new event. You can then track the movements of each finger from event to event to detect gestures, which you can apply to the contents of your application. For example, if the events indicate the user is performing a pinch-close or pinch-open gesture (as shown in “Using touch events to detect gestures”) and the underlying view supports magnification, you could use those events to change the current zoom level.

Figure 4 Using touch events to detect gestures


The system delivers events to the application’s responder objects, which are instances of the UIResponder class. In an iPhone application, your application’s views form the bulk of your custom responder objects. The MyApp application implements two view classes, but only the MyAppView class actually responds to event messages. This class detects taps both inside and outside the bounds of the Welcome button by overriding the following methods of UIResponder:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;

To simplify its own event-handling behavior, the MyApp application tracks only the first finger to touch the surface of the device. It does this with the support of the UIView class, which disables multi-touch events by default. For applications that do not need to track multiple fingers, this feature is a great convenience. When multi-touch events are disabled, the system delivers events only related to the first finger to touch the device. Events related to additional touches in a sequence are never delivered to the view. If you want the information for those additional touches, however, you can reenable multi-touch support using the setMultipleTouchEnabled: method of the UIView class.
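
If a view did want every finger, opting back in could be as simple as this sketch of a UIView subclass initializer (MyApp itself leaves multi-touch disabled):

- (id)initWithCoder:(NSCoder *)coder
{
    if ((self = [super initWithCoder:coder]))
    {
        // Multi-touch delivery is off by default; opt back in explicitly
        self.multipleTouchEnabled = YES;
    }
    return self;
}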

As part of its event-handling behavior, the MyAppView class performs the following steps:

1. When a touch first arrives, it checks to see where the event occurred.
* Double-taps outside the Welcome button update the string displayed by the button.
* Single taps inside the button center the button underneath the finger and trigger an initial animation to enlarge the button.
* All other touches are ignored.
2. If the finger moves and is inside the button, the button’s position is updated to match the new position of the finger.
3. If the finger was inside the button and then lifts off the surface of the device, an animation moves the button back to its original position.

“Handling an initial touch event” shows the touchesBegan:withEvent: method for the MyAppView class. The system calls this method when a finger first touches the device. This method gets the set of all touches and extracts the one and only touch object from it. The information in the UITouch object is used to identify in which view the touch occurred (the MyAppView object or the PlacardView object) and the number of taps associated with the touch. If the touch represents a double tap outside the button, the touchesBegan:withEvent: method calls the setupNextDisplayString method to change the welcome string of the button. If the event occurred inside the Welcome button, it uses the animateFirstTouchAtPoint: method to grow the button and track it to the touch location. All other touch-related events are ignored.

Listing 5 Handling an initial touch event
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
// We only support single touches, so anyObject
// retrieves just that touch from touches
UITouch *touch = [touches anyObject];
// Only move the placard view if the touch was in the placard view
if ([touch view] != placardView)
{
// In case of a double tap outside the placard view,
// update the placard’s display string
if ([touch tapCount] == 2)
{
[placardView setupNextDisplayString];
}
return;
}
// Animate the first touch
CGPoint touchPoint = [touch locationInView:self];
[self animateFirstTouchAtPoint:touchPoint];
}

“Responding to movement from a touch” shows the touchesMoved:withEvent: method of the MyAppView class. The system calls this method after the finger has touched the device and in response to it moving from its original location. The MyApp application tracks only those movements that occur within the Welcome button. As a result, this method checks the location of the event and uses it to adjust the center point of the PlacardView object. The movement of the view causes it to be redrawn at the new location automatically.

Listing 6 Responding to movement from a touch
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
// If the touch was in the placardView, move the placardView
// to its location
if ([touch view] == placardView)
{
CGPoint location = [touch locationInView:self];
placardView.center = location;
return;
}
}

When the user’s finger finally lifts from the screen, the MyApp application responds by triggering an animation to move the button back to its starting position in the center of the application’s window. “Releasing the Welcome button” shows the touchesEnded:withEvent: method that initiates the animation.

Listing 7 Releasing the Welcome button
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
// If the touch was in the placardView, bounce it back to the center
if ([touch view] == placardView)
{
// Disable user interaction so subsequent touches
// don’t interfere with animation
self.userInteractionEnabled = NO;
[self animatePlacardViewToCenter];
return;
}
}

To simplify the event handling process for the application, the touchesEnded:withEvent: method disables touch events for the view temporarily while the button animates back to its original position. If it did not do this, each of the event-handling methods would need to include logic to determine whether the button was in the middle of an animation and, if so, cancel the animation. Disabling user interactions for the short time it takes the button to travel back to the center of the screen simplifies the event handling code and eliminates the need for the extra logic. Upon reaching its original position, the animationDidStop:finished: method of the MyAppView class reenables user interactions so that the event cycle can begin all over again.
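
The article does not show animatePlacardViewToCenter or the delegate callback, but based on the description above they presumably look something like this sketch (the duration and selector name are assumptions):

- (void)animatePlacardViewToCenter
{
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.35];
    [UIView setAnimationDelegate:self];
    [UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:context:)];
    // Undo the scale transform and send the button back to the view's center
    placardView.transform = CGAffineTransformIdentity;
    placardView.center = self.center;
    [UIView commitAnimations];
}

- (void)animationDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context
{
    // The button is home again, so let touches through once more
    self.userInteractionEnabled = YES;
}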

If the application is interrupted for some reason (for example, by an incoming phone call), the view is sent a touchesCancelled:withEvent: message. In this situation, the application should try to do as little work as possible to avoid competing for device resources. In the example implementation, the placard view's center and transformation are simply set to their original values.
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
placardView.center = self.center;
placardView.transform = CGAffineTransformIdentity;
}

For more information on handling events in iPhone OS, see Event Handling in iPhone Application Programming Guide.

Animating the Button's Movement

In iPhone applications, animation plays a very important role. Animation is used extensively to provide the user with contextual information and immediate feedback. For example, when the user navigates hierarchical data in a productivity application, rather than just replace one screen with another, iPhone applications animate the movement of each new screen into place. The direction of movement indicates whether the user is moving up or down in the hierarchy and also provides a visual cue that there is new information to look at.

Because of its importance, support for animation is built into the classes of UIKit already. The MyApp application takes advantage of this support by using it to animate the different aspects of the Welcome button. When the user first touches the button, the application applies an animation that causes the size of the button to grow briefly. When the user lets go of the button, another animation snaps it back to its original position. The basic steps for creating these animations are essentially the same:

1. Call the beginAnimations:context: method of the view you want to animate.
2. Configure the animation properties.
3. Call the commitAnimations method of the view to begin the animation.

“Animating the Welcome button” shows the animation code used to pulse the Welcome button when it is first touched. This method sets the duration of the animation and then applies a transform to the button that scales it to its new size. When this animation completes, the animation infrastructure calls the growAnimationDidStop:finished:context: method of the animation delegate, which completes the pulse animation by shrinking the button slightly and moving the placard view under the touch.

Listing 8 Animating the Welcome button
- (void)animateFirstTouchAtPoint:(CGPoint)touchPoint
{
#define GROW_ANIMATION_DURATION_SECONDS 0.15
NSValue *touchPointValue = [[NSValue valueWithCGPoint:touchPoint] retain];
[UIView beginAnimations:nil context:touchPointValue];
[UIView setAnimationDuration:GROW_ANIMATION_DURATION_SECONDS];
[UIView setAnimationDelegate:self];
[UIView setAnimationDidStopSelector: @selector(growAnimationDidStop:finished:context:)];
CGAffineTransform transform = CGAffineTransformMakeScale(1.2, 1.2);
placardView.transform = transform;
[UIView commitAnimations];
}
- (void)growAnimationDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context
{
#define MOVE_ANIMATION_DURATION_SECONDS 0.15
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:MOVE_ANIMATION_DURATION_SECONDS];
placardView.transform = CGAffineTransformMakeScale(1.1, 1.1);
// Move the placard view under the touch
NSValue *touchPointValue = (NSValue *)context;
placardView.center = [touchPointValue CGPointValue];
[touchPointValue release];
[UIView commitAnimations];
}

Finishing the Application

In the preceding sections, you saw how the MyApp application was initialized, presented its user interface, and responded to events. In addition to those aspects of the application creation, there are also smaller details that need to be considered before building an application and loading it onto a device. One of the final pieces to put in place is your application’s information property-list (Info.plist) file. It is an XML file that communicates basic information about your application to the system. Xcode creates a default version of this file for you and inserts your application’s initial configuration information into it. You can extend this information, however, to provide additional details about your application that the system should know. For example, you would use this file to communicate information about your application version, any custom URL schemes it supports, its launch image, and the default visibility status and style of the system status bar.

“The contents of the Info.plist file” shows the contents of the Info.plist file for the MyApp application. This file identifies the name of the executable, the image file to display on the user's Home screen, and the string that identifies the application uniquely to the system. Because the MyApp application is a full-screen application (in other words, it does not display the status bar), it also includes the UIStatusBarHidden key and assigns to it the value true. Setting this key to true lets the system know that it should not display the application status bar at launch time or while the application is running. Although the MyApp application could configure this same behavior programmatically, that behavior would not take effect until after the application was already launched, which might look odd.

Listing 9 The contents of the Info.plist file

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDevelopmentRegion</key>
    <string>en</string>
    <key>CFBundleDisplayName</key>
    <string>${PRODUCT_NAME}</string>
    <key>CFBundleExecutable</key>
    <string>${EXECUTABLE_NAME}</string>
    <key>CFBundleIconFile</key>
    <string>Icon.png</string>
    <key>CFBundleIdentifier</key>
    <string>com.yourcompany.${PRODUCT_NAME:identifier}</string>
    <key>CFBundleInfoDictionaryVersion</key>
    <string>6.0</string>
    <key>CFBundleName</key>
    <string>${PRODUCT_NAME}</string>
    <key>CFBundlePackageType</key>
    <string>APPL</string>
    <key>CFBundleSignature</key>
    <string>????</string>
    <key>CFBundleVersion</key>
    <string>1.0</string>
    <key>UIStatusBarHidden</key>
    <true/>
    <key>NSMainNibFile</key>
    <string>MainWindow</string>
</dict>
</plist>



Note: You can edit the contents of your application’s Info.plist file using TextEdit, which displays the XML contents of the file as shown in “The contents of the Info.plist file”, or the Property List Editor, which displays the file’s keys and values in a table. Xcode also provides access to some of these attributes in the information window for your application target. To view this window, select your application target (in the Targets group) and choose File > Get Info. The Properties tab contains some (but not all) of the properties in the Info.plist file.

With this final piece in place, you now have all of the basic information needed to create your own functional iPhone application. The next step is to expand on the information you learned here by learning more about the features of iPhone OS. The applications you create should take advantage of the built-in features of iPhone OS to create a pleasant and intuitive user experience. Some of these features are described in “Taking Your Applications Further”, but for a complete list, and for information on how to use them, see iPhone Application Programming Guide.

Taking Your Applications Further

There are many features associated with iPhone and iPod touch that users take for granted. Some of these features are hardware related, such as the automatic adjustment of views in response to a change in a device’s orientation. Others are software related, such as the fact that the built-in iPhone applications all share a single list of contacts. Because so many of the features described next are integral to the basic user experience, you should consider them during your initial design to see how they might fit into your application.

Tracking Orientation and Motion Using the Accelerometers

The accelerometers in iPhone and iPod touch provide valuable input for the system and for your own custom applications. An accelerometer measures changes in velocity along a single linear axis. Both iPhone and iPod touch have three accelerometers to measure changes along each of the primary axes in three-dimensional space, allowing you to detect motion in any direction.

Figure 5 Accelerometer axes


Although you might not think measuring changes in acceleration would be very useful, in reality there is a lot you can do with the information. The force of gravity is constantly trying to pull objects to the ground. This force results in a measurable amount of acceleration toward the ground even when the device is at rest. By tracking which accelerometers are registering this acceleration, and the extent of that acceleration, you can detect the physical orientation of a device in 3D space with a fair amount of accuracy. You can then apply this orientation as input to your application.

The system uses the accelerometers to monitor a device’s current orientation and to notify your application when that orientation changes. If your application’s interface can be displayed in both landscape and portrait mode, you should incorporate view controllers into your basic design. The UIViewController class provides the infrastructure needed to rotate your interface and adjust the position of views automatically in response to orientation changes.
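
As a rough illustration of the view controller side of this, here is a small Swift sketch of a view controller that declares which orientations it supports. Note that it uses the current supportedInterfaceOrientations override; the iPhone OS releases this article describes used an older callback (shouldAutorotateToInterfaceOrientation:), but the idea is the same.

import UIKit

// Sketch: a view controller that opts in to portrait and both landscape orientations.
// UIKit then rotates the interface and repositions views automatically.
class RotatingViewController: UIViewController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        return [.portrait, .landscapeLeft, .landscapeRight]
    }
}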

If you want to access the raw accelerometer data directly, you can do so using the shared UIAccelerometer object in UIKit. The UIAccelerometer object reports the current accelerometer values at a configurable interval. You can use this data to detect the device’s orientation yourself or to detect other types of instantaneous motion, such as the user shaking the device back and forth. You can then use this information as input to a game or other application.
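
For a concrete feel of what raw accelerometer data at a configurable interval looks like, here is a hedged Swift sketch. UIAccelerometer itself was later retired in favor of Core Motion's CMMotionManager, so the sketch uses that class to illustrate the same idea; the tilt math in the handler follows the gravity reasoning described above.

import Foundation
import CoreMotion

// Sketch: polling raw accelerometer data and turning gravity into a rough tilt reading.
let motionManager = CMMotionManager()

func startWatchingTilt() {
    guard motionManager.isAccelerometerAvailable else { return }
    motionManager.accelerometerUpdateInterval = 0.1   // seconds between readings

    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // At rest the only measured acceleration is gravity, so the x/y/z
        // components reveal how the device is tilted relative to the ground.
        let sideTilt  = atan2(a.x, a.z)   // rough side-to-side tilt, in radians
        let frontTilt = atan2(a.y, a.z)   // rough front-to-back tilt, in radians
        print("side tilt: \(sideTilt), front tilt: \(frontTilt)")
    }
}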

Accessing the User’s Contacts

The user’s list of contacts is an important resource that all system applications share. The Phone, Mail, and SMS Text applications use it to identify people the user needs to contact and to facilitate basic interactions such as starting a phone call, email, or text message. Your own applications can access this list of contacts for similar purposes or to get other information relevant to your application’s needs.

Figure 6 Accessing the user’s contacts


iPhone OS provides both direct access to the user’s contacts and indirect access through a set of standard picker interfaces. With direct access, you obtain contact information straight from the contacts database, which is useful when you want to present that information in a different way or filter it based on application-specific criteria. In cases where you do not need a custom interface, however, iPhone OS also provides a set of standard system interfaces for picking and creating contacts. Incorporating these interfaces into your application requires little effort but makes it look and feel like part of the system.
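
As a sketch of the standard picker route, the Swift snippet below presents the system contact picker and receives the chosen contact through a delegate callback. The original iPhone OS exposed this through the Address Book UI framework; the Contacts UI framework used here is its later replacement, but the pattern of handing the interaction to a system-provided interface is the same.

import UIKit
import ContactsUI

// Sketch: letting the standard system picker handle contact selection.
class ContactPickingViewController: UIViewController, CNContactPickerDelegate {

    func pickContact() {
        let picker = CNContactPickerViewController()
        picker.delegate = self
        present(picker, animated: true)
    }

    // Called when the user taps a contact in the standard picker interface.
    func contactPicker(_ picker: CNContactPickerViewController, didSelect contact: CNContact) {
        print("Picked \(contact.givenName) \(contact.familyName)")
    }
}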

Getting the User’s Current Location

Devices that run iPhone OS are meant for users on the go, so the software you write for them should take that fact into account. Because the Internet and the web make it possible to do business anywhere, tailoring information to the user’s current location can make for a compelling user experience. After all, why list coffee shops in New York for someone who is thirsty and currently in Los Angeles? That’s where the Core Location framework can help.

The Core Location framework monitors signals coming from cell phone towers and Wi-Fi hotspots and uses them to triangulate the user’s current position. You can use this framework to grab an initial location fix only, or you can be notified whenever the user’s location changes. With this information, you can filter the information your application provides or use it in other ways.
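
A minimal Swift sketch of that flow is below: create a CLLocationManager, ask for permission, start updates, and read the coordinates in the delegate callback. The authorization call shown was added in a later iOS release than the one this article describes, and the app is assumed to have the usual location usage description configured.

import CoreLocation

// Sketch: grabbing the user's current location with Core Location.
class LocationWatcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        print("User is near \(latest.coordinate.latitude), \(latest.coordinate.longitude)")
        manager.stopUpdatingLocation()   // one fix is enough for this sketch
    }
}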

Playing Audio and Video

iPhone OS supports audio features in your application through the Core Audio and OpenAL frameworks, and provides video playback support using the Media Player framework. Core Audio provides an advanced interface for playing, recording, and manipulating sound and for parsing streamed audio. You can use it to play back simple sound effects or multichannel audio, mix sounds and position them in an audio field, and even trigger the vibrate feature of an iPhone. If you are a game developer and already have code that takes advantage of OpenAL, you can use your code in iPhone OS to position and play back audio in your games.
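
The simplest layer of Core Audio is System Sound Services, which covers two of the cases mentioned above: short sound effects and the vibrate feature. Here is a small Swift sketch; the file name tap.caf is just a placeholder for a short sound in your application bundle.

import Foundation
import AudioToolbox

// Sketch: playing a short sound effect and triggering vibration via System Sound Services.
func playTapSound() {
    guard let url = Bundle.main.url(forResource: "tap", withExtension: "caf") else { return }
    var soundID: SystemSoundID = 0
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
    AudioServicesPlaySystemSound(soundID)
}

func vibrate() {
    AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)
}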

The Media Player framework is what you use to play back full-screen video files. This framework supports the playback of many standard movie file formats and gives you control over the playback environment, including whether to display user controls and how to configure the aspect ratio of video content. Game developers might use this framework to play cut scenes or other prerendered content, while media-based applications can also use this framework to play back movie files.
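
As a rough sketch of full-screen playback, the Swift snippet below hands the system a movie URL and lets a standard player controller manage the playback UI. The Media Player framework class this article refers to (MPMoviePlayerController) has since been retired, so the sketch uses AVKit's AVPlayerViewController to show the same idea; intro.mp4 is a placeholder file name.

import UIKit
import AVKit
import AVFoundation

// Sketch: presenting a full-screen movie player for a bundled video file.
func playIntroMovie(from presenter: UIViewController) {
    guard let url = Bundle.main.url(forResource: "intro", withExtension: "mp4") else { return }
    let playerController = AVPlayerViewController()
    playerController.player = AVPlayer(url: url)
    presenter.present(playerController, animated: true) {
        playerController.player?.play()
    }
}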

Figure 7 Playing back custom video


Taking Pictures with the Built-in Camera

The Camera application on iPhone lets users take pictures and store them in a centralized photo library, along with the other pictures they upload from their computer. And although the iPod touch has no camera, it does have a photo library to hold the user’s uploaded pictures. iPhone OS provides access to both of these features through the UIImagePickerController class in the UIKit framework.

Figure 8 The iPhone camera


The UIImagePickerController class provides the implementation for both the camera and photo library interfaces for your application. These are the standard system interfaces used by other applications, including the Camera and Photos applications. When you display the picker interface, the picker controller takes care of all of the required user interactions and returns the resulting image to your application.
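
Here is a hedged Swift sketch of that flow: present the standard picker, let it run the camera or photo library interface, and receive the resulting image in the delegate callback. It falls back to the photo library when no camera is available, as on an iPod touch of this era.

import UIKit

// Sketch: using the standard system picker for the camera and photo library.
class PhotoPickingViewController: UIViewController,
                                  UIImagePickerControllerDelegate,
                                  UINavigationControllerDelegate {

    func pickPhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = UIImagePickerController.isSourceTypeAvailable(.camera) ? .camera : .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    // The picker returns the chosen or captured image here.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let image = info[.originalImage] as? UIImage
        print("Got image: \(String(describing: image))")
        picker.dismiss(animated: true)
    }
}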

Truly Inspiring Incident - Mr. Murthy - How Lucky!!

Here is an excerpt from Life Lessons from Narayana Murthy … an interesting read.

The next event that left an indelible mark on me occurred in 1974. The location: Nis, a border town between former Yugoslavia, now Serbia, and Bulgaria. I was hitchhiking from Paris back to Mysore, India, my home town.

By the time a kind driver dropped me at Nis railway station at 9 p.m. on a Saturday night, the restaurant was closed. So was the bank the next morning, and I could not eat because I had no local money. I slept on the railway platform until 8.30 pm in the night when the Sofia Express pulled in.

The only passengers in my compartment were a girl and a boy. I struck a conversation in French with the young girl. She talked about the travails of living in an iron curtain country, until we were roughly interrupted by some policemen who, I later gathered, were summoned by the young man who thought we were criticising the communist government of Bulgaria.

The girl was led away; my backpack and sleeping bag were confiscated. I was dragged along the platform into a small 8×8 foot room with a cold stone floor and a hole in one corner by way of toilet facilities. I was held in that bitterly cold room without food or water for over 72 hours.

I had lost all hope of ever seeing the outside world again, when the door opened. I was again dragged out unceremoniously, locked up in the guard’s compartment on a departing freight train and told that I would be released 20 hours later upon reaching Istanbul. The guard’s final words still ring in my ears – ”You are from a friendly country called India and that is why we are letting you go!”

http://www.rediff.com/money/2007/may/28bspec.htm