
  • Prashant Sutariya 1:12 pm on March 15, 2012 Permalink | Reply  

    Creating Web Services with PHP and SOAP, Part 2 

    In the first part of this series, I showed you how developing applications with the SOAP protocol is a great way to build interoperable software. I also demonstrated how easy it is to build your very own SOAP server and client using the NuSOAP library. This time around I’d like to introduce you to something that you will most definitely run into when working with SOAP – WSDL files.

    In this article we’ll talk about what WSDL files are and how to use them. I’ll show you how to quickly build your WSDL files with NuSOAP and incorporate a WSDL file into the SOAP server and client examples from the first part.

    What are WSDL Files?

    Web Services Description Language (WSDL) files are XML documents that provide metadata for a SOAP service. They contain information about the functions or methods the application makes available and what arguments to use. By making WSDL files available to the consumers of your service, it gives them the definitions they need to send valid requests precisely how you intend them to be. You can think of WSDL files as a complete contract for the application’s communication. If you truly want to make it easy for others to consume your service you will want to incorporate WSDL into your SOAP programming.

    WSDL Structure

    Just like SOAP messages, WSDL files have a specific schema to adhere to, and specific elements that must be in place to be valid. Let’s look at the major elements that make up a valid WSDL file and explain their uses.


    The root element of the WSDL file is the definitions element. This makes sense, as a WSDL file is by definition a definition of the web service. The types element describes the data types used which, in the case of WSDL, are defined with XML Schema. Within the message elements are the definitions of the data elements for the service; each message element can contain one or more part elements. The portType element defines the operations that can be performed with your web service and the request and response messages that are used. The binding element contains the protocol and data format specification for a particular portType. Finally, the service element defines a collection of ports, and each port contains the URI (location) of the service.

    The terminology has changed slightly in naming some of the elements in the WSDL 2.0 specification. portType, for example, has changed its name to Interface. Since support for WSDL 2.0 is weak, I’ve chosen to go over version 1.1 which is more widely used.
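    As a rough sketch of how those elements nest in WSDL 1.1 (the productlist names here are illustrative, loosely matching the service built below; a generated file will contain much more detail):

    ```xml
    <definitions name="productlist" targetNamespace="urn:productlist"
                 xmlns="http://schemas.xmlsoap.org/wsdl/">
      <types>
        <!-- XML Schema type definitions -->
      </types>
      <message name="getProdRequest">
        <part name="category" type="xsd:string"/>
      </message>
      <message name="getProdResponse">
        <part name="return" type="xsd:string"/>
      </message>
      <portType name="productlistPortType">
        <!-- operations, each pairing a request and a response message -->
      </portType>
      <binding name="productlistBinding" type="tns:productlistPortType">
        <!-- protocol and data format for the portType (e.g. SOAP RPC/encoded) -->
      </binding>
      <service name="productlist">
        <!-- port(s) giving the URI (location) of the service -->
      </service>
    </definitions>
    ```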

    Building a WSDL File

    WSDL files can be cumbersome to write by hand as they must contain specific tags and are usually quite long. The nice thing about using NuSOAP is that it can create a WSDL file for you! Let’s modify the SOAP server we made in the first article to accommodate this.

    Open productlist.php and change it to reflect the code below:

    require_once "nusoap.php";

    function getProd($category) {
        // Mock lookup: return a hard-coded CSV list of titles
        if ($category == "books") {
            return "The WordPress Anthology,PHP Master,Build Your Own Website the Right Way";
        }
        return "No products listed under that category";
    }

    $server = new soap_server();
    $server->configureWSDL("productlist", "urn:productlist");
    $server->register("getProd",
        array("category" => "xsd:string"),
        array("return" => "xsd:string"),
        "urn:productlist",
        "urn:productlist#getProd",
        "rpc",
        "encoded",
        "Get a listing of products by category");
    $server->service($HTTP_RAW_POST_DATA);

    Basically this is the same code as before but with only a couple of changes. The first change adds a call to configureWSDL(); the method acts as a flag to tell the server to generate a WSDL file for our service. The first argument is the name of the service and the second is the namespace for our service. A discussion of namespaces is really outside the scope of this article, but be aware that although we are not taking advantage of them here, platforms like Apache Axis and .NET do. It’s best to include them to be fully interoperable.

    The second change adds additional arguments to the register() method. Breaking it down:

    • getProd is the function name
    • array(“category” => “xsd:string”) defines the input argument to getProd and its data type
    • array(“return” => “xsd:string”) defines the function’s return value and its data type
    • urn:productlist defines the namespace
    • urn:productlist#getProd defines the SOAP action
    • rpc defines the type of call (this could be either rpc or document)
    • encoded defines the value for the use attribute (encoded or literal could be used)
    • The last parameter is a documentation string that describes what the getProd function does

    Now point your browser to http://yourwebroot/productlist.php?wsdl and you’ll see the brand new WSDL file created for you. Go ahead and copy that source, save it as its own file called products.wsdl, and place it in your web directory.

    Consuming WSDL Files with the Client

    We’ve modified the SOAP server to generate a WSDL file, so now lets modify the SOAP client to consume it. Open up productlistclient.php created in the previous article and simply change the line that initiates the client from this:

    $client = new nusoap_client("http://localhost/nusoap/productlist.php");

    to this:

    $client = new nusoap_client(“products.wsdl”, true);

    The second parameter in the nusoap_client() constructor call tells NuSOAP that we are building a client from a WSDL file. Now launch productlistclient.php in your browser and you should see the same result as before, but now you’re using WSDL power!


    In part 2 of this series on creating web services with PHP and SOAP, we went over the importance of using WSDL files for optimum interoperability. We talked about the different elements that make up a WSDL file and their definitions, and then I showed you how to quickly and easily create your own WSDL files with the NuSOAP library. Finally, we modified our SOAP server and client to demonstrate how to use WSDL in your applications.

    As you can probably guess, I’ve just barely scraped the surface of what SOAP can do for you, but with these new tools you can provide an easy and well-accepted way of exposing web services to your users.

  • Techmodi 2:07 pm on March 12, 2012 Permalink | Reply  

    Techmodi among Top 10 providers on Elance 

    Techmodi has achieved a place among the top 10 providers on Elance, out of the site’s 29,789 registered and active providers. This well-deserved success is the result of the dedication and quality work Techmodi offers to all its clients globally.

    Elance Techmodi top 10 provider




  • Prashant Sutariya 7:35 am on March 2, 2012 Permalink | Reply  

    Creating Web Services with PHP and SOAP, Part 1 


    As application developers, the ability to develop software and services for a wide range of platforms is a necessary skill, but not everyone uses the same language or platform, and writing code to support them all is not feasible. If only there were a standard that allowed us to write code once and allowed others to interact with it from their own software with ease. Well, luckily there is… and its name is SOAP. (SOAP used to be an acronym which stood for Simple Object Access Protocol, but as of version 1.2 the protocol goes simply by the name SOAP.)

    SOAP allows you to build interoperable software and allows others to take advantage of your software over a network. It defines rules for sending and receiving Remote Procedure Calls (RPC), such as the structure of the requests and responses. Therefore, SOAP is not tied to any specific operating system or programming language; all that matters is that someone can formulate and parse a SOAP message in their chosen language.

    In this first of a two part series on web services I’ll talk about the SOAP specification and what is involved in creating SOAP messages. I’ll also demonstrate how to create a SOAP server and client using the excellent NuSOAP library to illustrate the flow of SOAP. In the second part I’ll talk about the importance of WSDL files, how you can easily generate them with NuSOAP as well, and how a client may use a WSDL file to better understand your web service.

    The Structure of a SOAP Message

    SOAP is based on XML, so it is considered human readable, but there is a specific schema that must be adhered to. Let’s first break down a SOAP message, stripping out all of its data, and just look at the specific elements that make up a SOAP message.

    <?xml version="1.0"?>
    <soap:Envelope
      xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
      soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
      <soap:Header>
        ...
      </soap:Header>
      <soap:Body>
        ...
        <soap:Fault>
          ...
        </soap:Fault>
      </soap:Body>
    </soap:Envelope>

    This might look like just an ordinary XML file, but what makes it a SOAP message is the root element Envelope with the namespace soap set to "http://www.w3.org/2001/12/soap-envelope". The soap:encodingStyle attribute determines the data types used in the file, but SOAP itself does not have a default encoding.

    soap:Envelope is mandatory, but the next element, soap:Header, is optional and usually contains information relevant to authentication and session handling. The SOAP protocol doesn’t offer any built-in authentication, but allows developers to include it in this header tag.

    Next there’s the required soap:Body element which contains the actual RPC message, including method names and, in the case of a response, the return values of the method. The soap:Fault element is optional; if present, it holds any error messages or status information for the SOAP message and must be a child element of soap:Body.
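    For illustration, a fault response might look like this (the fault code and message here are made up):

    ```xml
    <?xml version="1.0"?>
    <soap:Envelope
      xmlns:soap="http://www.w3.org/2001/12/soap-envelope">
      <soap:Body>
        <soap:Fault>
          <faultcode>soap:Server</faultcode>
          <faultstring>Unable to look up stock price</faultstring>
        </soap:Fault>
      </soap:Body>
    </soap:Envelope>
    ```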

    Now that you understand the basics of what makes up a SOAP message, let’s look at what SOAP request and response messages might look like. Let’s start with a request.

    <?xml version="1.0"?>
    <soap:Envelope
      xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
      soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
      <soap:Body xmlns:m="http://www.example.com/stock">
        <m:GetStockPrice>
          <m:StockName>IBM</m:StockName>
        </m:GetStockPrice>
      </soap:Body>
    </soap:Envelope>

    Above is an example SOAP request message to obtain the stock price of a particular company. Inside soap:Body you’ll notice the GetStockPrice element which is specific to the application. It’s not a SOAP element, and it takes its name from the function on the server that will be called for this request. StockName is also specific to the application and is an argument for the function.

    The response message is similar to the request:

    <?xml version="1.0"?>
    <soap:Envelope
      xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
      soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
      <soap:Body xmlns:m="http://www.example.com/stock">
        <m:GetStockPriceResponse>
          <m:Price>34.5</m:Price>
        </m:GetStockPriceResponse>
      </soap:Body>
    </soap:Envelope>

    Inside the soap:Body element there is a GetStockPriceResponse element with a Price child that contains the return data. As you would guess, both GetStockPriceResponse and Price are specific to this application.

    Now that you’ve seen an example request and response and understand the structure of a SOAP message, let’s install NuSOAP and build a SOAP client and server to demonstrate generating such messages.

    Building a SOAP Server

    It couldn’t be easier to get NuSOAP up and running on your server: just download and unzip the package in your web root directory, and you’re done. To use the library, just include the nusoap.php file in your code.

    For the server, let’s say we’ve been given the task of building a service to provide a listing of products given a product category. The server should read in the category from a request, look up any products that match the category, and return the list to the user in a CSV format.

    Create a file in your web root named productlist.php with the following code:

    require_once "nusoap.php";

    function getProd($category) {
        // Mock lookup: return a hard-coded CSV list of titles
        if ($category == "books") {
            return "The WordPress Anthology,PHP Master,Build Your Own Website the Right Way";
        }
        return "No products listed under that category";
    }

    $server = new soap_server();
    $server->register("getProd");
    $server->service($HTTP_RAW_POST_DATA);

    First, the nusoap.php file is included to take advantage of the NuSOAP library. Then the getProd() function is defined. Afterward, a new instance of the soap_server class is instantiated, and the getProd() function is registered with its register() method.

    This is really all that’s needed to create your own SOAP server – simple, isn’t it? In a real-world scenario you would probably look up the list of books from a database, but since I want to focus on SOAP, I’ve mocked getProd() to return a hard-coded list of titles.

    If you want to include more functionality in the server, you only need to define the additional functions (or even methods in classes) and register each one as you did above.

    Now that we have a working server, let’s build a client to take advantage of it.

    Building a SOAP Client

    Create a file named productlistclient.php and use the code below:

    require_once "nusoap.php";

    $client = new nusoap_client("http://localhost/nusoap/productlist.php");
    $error = $client->getError();
    if ($error) {
        echo "<h2>Constructor error</h2><pre>" . $error . "</pre>";
    }

    $result = $client->call("getProd", array("category" => "books"));

    if ($client->fault) {
        echo "<h2>Fault</h2><pre>"; print_r($result); echo "</pre>";
    } elseif ($error = $client->getError()) {
        echo "<h2>Error</h2><pre>" . $error . "</pre>";
    } else {
        echo "<h2>Books</h2><pre>" . $result . "</pre>";
    }

    Once again we include nusoap.php with require_once and then create a new instance of nusoap_client. The constructor takes the location of the newly created SOAP server to connect to. The getError() method checks to see if the client was created correctly and the code displays an error message if it wasn’t.

    The call() method generates and sends the SOAP request to call the method or function defined by the first argument. The second argument to call() is an associative array of arguments for the RPC. The fault property and getError() method are used to check for and display any errors. If there are no errors, then the result of the function is output.

    Now with both files in your web root directory, launch the client script (in my case http://localhost/nusoap/productlistclient.php) in your browser. You should see the following:


    If you want to inspect the SOAP request and response messages for debug purposes, or if you just want to pick them apart for fun, add these lines to the bottom of productlistclient.php:

    echo "<h2>Request</h2>";
    echo "<pre>" . htmlspecialchars($client->request, ENT_QUOTES) . "</pre>";
    echo "<h2>Response</h2>";
    echo "<pre>" . htmlspecialchars($client->response, ENT_QUOTES) . "</pre>";

    The HTTP headers and XML content will now be appended to the output.


    In this first part of the series you learned that SOAP provides the ability to build interoperable software supporting a wide range of platforms and programming languages. You also learned about the different parts of a SOAP message and built your own SOAP server and client to demonstrate how SOAP works.

    In the next part I’ll take you deeper into the SOAP rabbit hole and explain what a WSDL file is and how it can help you with the documentation and structure of your web service.

  • Prashant Sutariya 7:34 am on March 2, 2012 Permalink | Reply  

    Working with Dates and Times in PHP 

    time zones

    When working in any programming language, dealing with dates and time is often a trivial and simple task. That is, until time zones have to be supported. Fortunately, PHP has one of the most potent set of date/time tools that help you deal with all sorts of time-related issues: UNIX timestamps, formatting dates for human consumption, displaying times with time zones, the difference between now and the second Tuesday of next month, etc. In this article I’ll introduce you to the basics of PHP’s time functions (time(), mktime(), and date()) and their object-oriented counterparts and show you how to make them play nicely with PHP.

    PHP Date and Time Functions

    Much of this article works with UNIX time, also known as POSIX or epoch time: time represented as the number of seconds that have ticked away since midnight of January 1, 1970, UTC. If you’re interested in a complete history of UNIX time, check out the UNIX time article on Wikipedia.

    UTC, also known by its full name Coordinated Universal Time, also referred to as GMT, and sometimes Zulu time, is the time at 0 degrees longitude. All other time zones in the world are expressed as positive or negative offsets from this time. Treating time in UTC and Unix time will make your life easier when you need to deal with time zones. I’ll say more on this later, but let’s ignore time zone issues for now and look at some time functions.

    Getting the Current UNIX Time

    time() takes no arguments and returns the number of seconds since the Unix epoch. To illustrate this, I will use the PHP interactive CLI shell.

    sean@beerhaus:~$ php -a
    php > print time();

    If you need an array representation of the Unix time, use the getdate() function. It takes an optional Unix timestamp argument, but defaults to the value of time() if one isn’t provided.

    php > $unixTime = time();
    php > print_r(getdate($unixTime));
    Array
    (
        [seconds] => 48
        [minutes] => 54
        [hours] => 12
        [mday] => 20
        [wday] => 2
        [mon] => 12
        [year] => 2011
        [yday] => 353
        [weekday] => Tuesday
        [month] => December
        [0] => 1324403688
    )

    Formatting a UNIX Time

    Unix time can be easily formatted into just about any string a human would want to read. The date() function is used to format Unix timestamps into a human readable string, and takes a formatting argument and an optional time argument. If the optional timestamp is not provided, the value of time() is used.

    php > print date("r", $unixTime);
    Tue, 20 Dec 2011 12:54:48 -0500

    The “r” formatting string returns the time formatted as specified by RFC 2822. Of course, you can use other specifiers to define your own custom formats.

    php > print date("m/d/y h:i:s a", $unixTime);
    12/20/11 12:54:48 pm
    php > print date("m/d/y h:i:s a");
    12/20/11 01:12:11 pm
    php > print date("jS \of F Y", $unixTime);
    20th of December 2011

    For the entire list of acceptable formatting characters, see the page for date() in the PHP documentation. The function becomes more useful though when combined with the mktime() and strtotime() functions, as you’ll see in the coming examples.

    Creating UNIX Time from a Given Time

    mktime() is used to create a Unix timestamp given a list of values that correspond to each part of a date (seconds, minutes, hours, year, etc). It takes a number of integer arguments to set each part of the date in this order:

    mktime(hour, minute, second, month, day, year, isDST)

    You set isDST to 1 if daylight saving time is in effect, 0 if it’s not, and -1 if it’s unknown (the default value).

    php > print date("r", mktime(12, 0, 0, 1, 20, 1987));
    Tue, 20 Jan 1987 12:00:00 -0500
    php > print date("r", mktime(0, 0, 0, date("n"), date("j"), date("Y")));
    Tue, 20 Dec 2011 00:00:00 -0500
    php > print date("r", mktime(23, 59, 59, date("n"), date("j"), date("Y")));
    Tue, 20 Dec 2011 23:59:59 -0500

    You can see that mktime() can be very helpful when dealing with database queries that use date ranges customized by a user. For example, if you’re storing timestamps as integers (UNIX time) in MySQL (foreshadowing anyone?), it’s very easy to set up a common year-to-date query range.

    $startTime = mktime(0, 0, 0, 1, 1, date("y"));
    $endTime   = mktime(0, 0, 0, date("m"), date("d"), date("y"));
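    For example (the orders table and its integer created_at column here are hypothetical), building such a year-to-date query might look like this:

    ```php
    <?php
    // Assume UTC, as recommended earlier in the article
    date_default_timezone_set("UTC");

    // Year-to-date range in UNIX time
    $startTime = mktime(0, 0, 0, 1, 1, date("Y"));
    $endTime   = time();

    // Hypothetical schema: orders.created_at stored as an integer UNIX timestamp
    $sql = sprintf(
        "SELECT * FROM orders WHERE created_at BETWEEN %d AND %d",
        $startTime,
        $endTime
    );
    print $sql;
    ```

    Because the timestamps are plain integers, the range comparison needs no date functions on the database side.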

    Parsing an English Date to UNIX Time

    The almost magical function strtotime() takes a string of date/time formats as its first argument and, as an optional second argument, a Unix timestamp to use as the basis for the conversion. See the documentation for acceptable date formats.

    php > print strtotime("now");
    php > print date("r", strtotime("now"));
    Tue, 20 Dec 2011 14:01:51 -0500
    php > print strtotime("+1 week");
    php > print date("r", strtotime("+1 week"));
    Tue, 27 Dec 2011 14:03:03 -0500
    php > print date("r", strtotime("next month"));
    Fri, 20 Jan 2012 14:04:20 -0500
    php > print date("r", strtotime("next month", mktime(0, 0, 0)));
    Fri, 20 Jan 2012 00:00:00 -0500
    php > print date("r", strtotime("next month", mktime(0, 0, 0, 1, 31)));
    Thu, 03 Mar 2011 00:00:00 -0500
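    To make the role of that second argument explicit, here is a small sketch that anchors relative formats to a fixed base timestamp (it assumes UTC as the default time zone, as recommended later in the article):

    ```php
    <?php
    date_default_timezone_set("UTC");

    // Anchor relative formats to a known timestamp instead of "now"
    $base = mktime(0, 0, 0, 12, 20, 2011);  // Tue, 20 Dec 2011 00:00:00

    print date("Y-m-d", strtotime("+1 week", $base));                  // 2011-12-27
    print "\n";
    print date("Y-m-d", strtotime("first day of next month", $base));  // 2012-01-01
    ```

    Pinning the base like this makes relative-date code deterministic and easy to unit test.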

    PHP’s DateTime and DateTimeZone Objects

    PHP’s DateTime object is the object-oriented approach to dealing with dates and time zones. The constructor method accepts a string representation of a time, very similar to strtotime() above, and some might find this more pleasant to work with. The default value if no argument is provided is “now”.

    php > $dt = new DateTime("now"); 
    php > print $dt->format("r");
    Tue, 20 Dec 2011 16:28:32 -0500
    php > $dt = new DateTime("December 31 1999 12:12:12 EST");
    php > print $dt->format("r");
    Fri, 31 Dec 1999 12:12:12 -0500

    DateTime’s format() method works just like the date() function above, and accepts all of the same formatting characters. DateTime objects also come with a few useful constants that can be fed to the format() method.

    php > print $dt->format(DATE_ATOM);
    php > print $dt->format(DATE_ISO8601);
    php > print $dt->format(DATE_RFC822);
    Tue, 20 Dec 11 15:57:45 -0500
    php > print $dt->format(DATE_RSS);
    Tue, 20 Dec 2011 15:57:45 -0500

    The complete list of constants can be found on the DateTime documentation page.

    Since we will soon be dealing with time zones, let’s give PHP a default time zone to use. In your php.ini configuration file (I have one for CLI and one for Apache) find the section that looks like this:

    ; Defines the default timezone used by the date functions
    ; date.timezone =

    When no value is given to date.timezone, PHP will try its best to determine the system time zone as set on your server. You can check which value PHP is using with date_default_timezone_get().

    php > print date_default_timezone_get();

    Let’s set the time zone of the server to UTC time (date.timezone = UTC) and save the configuration file. You will have to restart Apache or the CLI shell to see the changes.
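    If you can’t edit php.ini (on shared hosting, say), the same default can be set per script at runtime:

    ```php
    <?php
    // Equivalent to date.timezone = UTC in php.ini, but scoped to this script
    date_default_timezone_set("UTC");
    print date_default_timezone_get(); // UTC
    ```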

    PHP DateTime objects include an internal DateTimeZone class instance to track time zones.  When you create a new instance of DateTime, the internal DateTimeZone should be set to your default provided in php.ini.

    php > $dt = new DateTime();
    php > print $dt->getTimeZone()->getName();

    The complete list of acceptable time zone names can be found on the time zone documentation page.
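    You can also ask PHP itself for the valid names: DateTimeZone::listIdentifiers() returns the full list known to PHP’s time zone database.

    ```php
    <?php
    // All time zone identifiers known to PHP's timezone database
    $zones = DateTimeZone::listIdentifiers();

    print count($zones) . " zones\n";
    print in_array("America/New_York", $zones) ? "found" : "missing"; // found
    ```

    This is handy for validating user-supplied time zone names before constructing a DateTimeZone.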

    You can now see the difference in times when two DateTime objects are given different time zones. For example, here’s a sample that converts from UTC to America/New_York (EST) time.

    php > $dt = new DateTime();
    php > print $dt->format("r");
    Tue, 20 Dec 2011 20:57:45 +0000
    php > $tz = new DateTimeZone("America/New_York");
    php > $dt->setTimezone($tz);
    php > print $dt->format("r");
    Tue, 20 Dec 2011 15:57:45 -0500

    Notice the -0500 offset for the month of December. If you change the time value to a summer date, such as July 1, you’ll see it is aware of Daylight Saving Time (EDT).

    php > $tz = new DateTimeZone("America/New_York");
    php > $july = new DateTime("7/1/2011");
    php > $july->setTimezone($tz);
    php > print $july->format("r");
    Thu, 30 Jun 2011 20:00:00 -0400
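    If you only need the raw offset rather than a formatted string, DateTimeZone’s getOffset() method returns a zone’s offset from UTC in seconds for a given DateTime, with DST already accounted for. A quick sketch:

    ```php
    <?php
    date_default_timezone_set("UTC");

    $tz     = new DateTimeZone("America/New_York");
    $winter = new DateTime("2011-12-20");
    $summer = new DateTime("2011-07-01");

    // Offset from UTC in seconds, converted to hours
    print $tz->getOffset($winter) / 3600; // -5 (EST)
    print "\n";
    print $tz->getOffset($summer) / 3600; // -4 (EDT)
    ```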


    Dealing with dates and time zones are an everyday part of many programmers’ lives, but it’s nothing to worry about when you have PHP’s robust and easy-to-use date libraries to work with.

    You’ve seen how easy it is to get a UNIX timestamp, how to format a date into any imaginable format, how to parse an English representation of a date in to a timestamp, how to add a period of time to a timestamp, and how to convert between time zones. If there are two main points to take away from the article, they would be to 1) stick with UNIX time and 2) stick with UTC as the base time zone for all dates when working with PHP.

    The idea of basing all time on UTC is not one that only applies to PHP; it’s considered good practice in any language. And if you ever find yourself working in another language, there is a good chance you will say to yourself “Dammit, why can’t they do it like PHP?”

  • Prashant Sutariya 5:43 am on February 24, 2012 Permalink | Reply
    Tags: Software Testing, Testing, Web Testing   

    20 Top Practical Testing Tips A Tester Should Know 

    Testing doesn’t stop with debugging. It is very rare to encounter every kind of scenario in a single testing session; testers learn these practices through experience. Here are the top 20 practical software testing tips a tester should read before testing any application.

    1) Analyze your test results. Troubleshooting the root cause of a failure leads you to the solution of the problem, so thorough analysis is essential and can save you from many avoidable mistakes. Bugs are introduced by both people and machines; common causes include miscommunication, software complexity, programming errors, changing requirements, and time pressure.

    2) Use proven testing tools. Make use of every available tool for testing the application. Trial and error can produce better results, but it is practically impossible to apply every testing method, so favor the methods and tools that gave the best results earlier. A testing tool chosen from a QA perspective should support verification, release scenarios, and the decision to release the product.

    3) Ensure maximum test coverage. Breaking your Application Under Test (AUT) into smaller functional modules helps you achieve maximum test coverage; where possible, break those modules into smaller parts still. Here is an example:

    E.g.: Let’s assume you have divided your website application into modules, and accepting user information is one of them. You can break this user information screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the user information form, and so on. Apply all form field type and size tests, plus negative and validation tests on input fields, and write all such test cases for maximum coverage.

    4) Write test cases for intended functionality first, then for invalid conditions. This covers both the expected and unexpected behavior of the application under test.

    Some cases that should be considered while testing web applications:
    • Functionality testing
    • Performance testing
    • Usability testing
    • Server-side interface
    • Client-side compatibility
    • Security

    5) Keep an error-finding attitude. As a software tester or QA engineer you must stay curious about finding bugs in an application; a subtle bug left in place can crash the entire system. Finding such a subtle bug is the most challenging work, it gives you satisfaction in your work, and it helps you remain positive.

    6) Write test cases during requirement analysis. Designing test cases while requirements are still being analyzed helps you ensure that all requirements are testable.

    7) Make your test cases available to developers. Let developers analyze your test cases thoroughly so they can develop a quality application. This helps them stay vigilant while coding and, although it takes time, it helps you release a quality product.

    8) Group test cases for quick testing. If possible, identify and group your test cases for regression testing. This ensures quick and effective manual regression testing.

    9) Prioritize performance testing. Applications where response time is critical should be given the highest priority for performance testing. In practice performance testing is often skipped because it requires large data volumes, but don’t avoid it.

    10) Avoid testing your own code. Developers are not good testers of their own work: no developer likes to be blamed for their work, they remain optimistic about their product, and they tend to miss their own bugs because the person who writes the code generally sees only the happy paths and doesn’t want to go into much detail.

    11) Testing has no limits. The sky is the only limit when testing an application; use all available means to improve its quality.

    12) Take advantage of previous bug graphs. A graph of bugs found over time, per module, is an aid for finding new bugs, especially during regression testing. This module-wise bug graph can be used to predict the most bug-prone parts of the application.

    13) Review your test process. Keep track of your test results; they can teach you a lot. Keep a text file open while testing an application and use those notes while preparing the final test release report. This good habit helps you provide a complete, unambiguous test report and release details.

    14) Note all code changes. Banking projects in particular require many steps in the development or testing environment to avoid executing live transaction processing. Therefore, note down every change made for testing purposes as testers or developers modify the code base of the application under test.

    15) Keep developers away from the test environment. If developers don’t have access to the testing environment, they cannot accidentally make changes on it, and any missing pieces can be caught in the right place.

    16) Involve testers in the design phase. When testers are brought in right from the software requirement and design phase, they become part of the development process, so ask your lead or manager to involve the testing team in all decision-making processes and meetings. This way testers gain knowledge of the application’s dependencies, resulting in detailed test coverage.

    17) Build rapport with other testing teams. A good relationship with your co-testers on other teams helps both parties share the best of their testing experience.

    18) Keep testers and developers communicating in writing. Do not keep anything verbal. To learn more about the product, testers should work closely with the developers; such a relationship resolves many issues early in the product’s life, but make sure to confirm everything over written channels such as email.

    19) Prioritize by risk. Analyzing all risks helps a lot in prioritizing work, and it is the first step in saving time; it keeps you from wasting effort on low-value work.

    20) Write a clear final report. Testing is a creative and challenging task, so do not fail to create a clear report about the bugs found and their possible solutions. It will remain a record of do’s and don’ts in testing for future generations.

  • Prashant Sutariya 4:47 am on February 24, 2012 Permalink | Reply
    Tags: Amazon AWS, Amazon EC2, Cloud Computing, Cloud Hosting, EC2   

    AWS Free Usage Tier now includes Amazon EC2 instances 

    Amazon is excited to announce that the AWS (Amazon Web Services) Free Usage Tier now includes Amazon EC2 instances running Microsoft Windows Server. Customers eligible for the AWS Free Usage Tier can now use up to 750 hours per month of t1.micro instances running Microsoft Windows Server for free. With this announcement, customers familiar with Windows Server can gain hands-on experience with AWS at no cost. Customers can select from a range of pre-configured Amazon Machine Images with Microsoft Windows Server 2008 R2. Once running, customers can connect via Microsoft Remote Desktop Client to begin building, migrating, testing, and deploying their web applications on AWS in minutes. The expanded Free Usage Tier with Microsoft Windows Server t1.micro instances is available today in all regions, except for AWS GovCloud. For more information about the AWS Free Usage Tier, please visit the AWS Free Usage Tier web page. To get started using Microsoft Windows Server on AWS, visit the AWS Windows web page.

    AWS Free Usage Tier

    To help new AWS customers get started in the cloud, AWS is introducing a free usage tier. New AWS customers will be able to run a free Amazon EC2 Micro Instance for a year, while also leveraging a free usage tier for Amazon S3, Amazon Elastic Block Store, Amazon Elastic Load Balancing, and AWS data transfer. AWS’s free usage tier can be used for anything you want to run in the cloud: launch new applications, test existing applications in the cloud, or simply gain hands-on experience with AWS.

    Below are the highlights of AWS’s free usage tiers. All are available for one year (except SWF, DynamoDB, SimpleDB, SQS, and SNS which are free indefinitely):

    AWS Free Usage Tier (Per Month):

    • 750 hours of Amazon EC2 Linux Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month*
    • 750 hours of Amazon EC2 Microsoft Windows Server Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month*
    • 750 hours of an Elastic Load Balancer plus 15 GB data processing*
    • 30 GB of Amazon Elastic Block Storage, plus 2 million I/Os and 1 GB of snapshot storage*
    • 5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests*
    • 100 MB of storage, 5 units of write capacity, and 10 units of read capacity for Amazon DynamoDB.**
    • 25 Amazon SimpleDB Machine Hours and 1 GB of Storage**
    • 1,000 Amazon SWF workflow executions can be initiated for free. A total of 10,000 activity tasks, signals, timers and markers, and 30,000 workflow-days can also be used for free**
    • 100,000 Requests of Amazon Simple Queue Service**
    • 100,000 Requests, 100,000 HTTP notifications and 1,000 email notifications for Amazon Simple Notification Service**
    • 10 Amazon CloudWatch metrics, 10 alarms, and 1,000,000 API requests**
    • 15 GB of bandwidth out aggregated across all AWS services*

    In addition to these services, the AWS Management Console is available at no charge to help you build and manage your application on AWS.

    * These free tiers are only available to new AWS customers, and are available for 12 months following your AWS sign-up date. When your free usage expires or if your application use exceeds the free usage tiers, you simply pay standard, pay-as-you-go service rates (see each service page for full pricing details). Restrictions apply; see offer terms for more details.

    ** These free tiers do not expire after 12 months and are available to both existing and new AWS customers indefinitely.
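
    As a quick sanity check on the “enough hours to run continuously each month” claim above, a back-of-the-envelope calculation (a Python sketch; the 750-hour figure comes from the list above):

    ```python
    # 750 free micro-instance hours per month vs. the longest possible month.
    hours_in_longest_month = 31 * 24   # 744 hours in a 31-day month
    free_tier_hours = 750

    # A single t1.micro running 24/7 never exceeds the free allotment.
    assert free_tier_hours >= hours_in_longest_month
    print(free_tier_hours - hours_in_longest_month)  # 6 hours of headroom
    ```

    So even in a 31-day month, one instance can run around the clock and stay inside the free tier.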

  • Prashant Sutariya 8:18 am on February 14, 2012 Permalink | Reply  

    Creating Flashy Menu 

    Today we are going to learn how to create a flashy menu using CSS3. This tutorial is for beginners and can be completed fairly quickly. To see the demo, click here.

    The HTML

    Our HTML document contains an unordered list; each list item is a link made with an anchor tag, and a span inside the anchor holds the name of the menu item.

    <ul class="main-ul">
         <li><a href="#"><span>Home</span></a></li>
         <li><a href="#"><span>Article</span></a></li>
         <li><a href="#"><span>Blog</span></a></li>
         <li><a href="#"><span>Gallery</span></a></li>
         <li><a href="#"><span>About</span></a></li>
         <li><a href="#"><span>Contact Us</span></a></li>
         <li><a href="#"><span>Alumni</span></a></li>
         <li><a href="#"><span>Portfolio</span></a></li>
    </ul>

    The CSS

    Now let’s position the menu list items. I am using a 25% width for each item, so four menu items fit in each row. I’m also centering the text of each list item.

    body {
         background: #eee url(../images/white_paperboard.png) repeat top right;
    }
    .main-ul li {
         float: left;
         position: relative;
         width: 25%;
         text-align: center;
    }

    Next, let’s style each anchor tag: display it as a block and remove its text decoration. I am using a light gray background color, and I am adding CSS3 transition effects to these elements with a duration of one second.

    .main-ul li a {
         display: block;
         padding: 10px 10px 20px;
         text-decoration: none;
         position: relative;
         z-index: 100;
         background-color: rgba(164, 164, 164, 0.2);
         -webkit-transition: all 1s;
         -moz-transition: all 1s;
         -o-transition: all 1s;
         transition: all 1s;
    }

    I am using the ‘Kotta One’ font for the span text, with a font size of 20px and a font weight of 700. I’ve made the font color of the text white in its hover state.

    .main-ul li a span {
         display: block;
         padding-top: 10px;
         font-weight: 700;
         font-size: 20px;
         color: rgba(120, 120, 120, 0.9);
         text-transform: uppercase;
         font-family: 'Kotta One', serif;
    }
    .main-ul li:hover span {
         color: #fff;
    }

    Here comes the main part. I have already added the transition effect to the anchor tags; now we add a hover effect for each list item by changing the background color of its anchor. When someone hovers over a menu item, its background changes to a new color. I’m also adding a CSS3 transform rotate effect of 3 degrees.

    .main-ul li:nth-child(1):hover a {
         background-color: rgba(175, 54, 55, 0.8);
         -moz-transform: rotate(-3deg);
         -webkit-transform: rotate(-3deg);
         -o-transform: rotate(-3deg);
         transform: rotate(-3deg);
    }

    Now repeat the above step for all list items with a new background color of your choice!

    .main-ul li:nth-child(2):hover a {
         background-color: rgba(199, 204, 73, 0.8);
         -moz-transform: rotate(-3deg);
         -webkit-transform: rotate(-3deg);
         -o-transform: rotate(-3deg);
         transform: rotate(-3deg);
    }
    .main-ul li:nth-child(3):hover a {
         background-color: rgba(213, 135, 11, 0.8);
         -moz-transform: rotate(3deg);
         -webkit-transform: rotate(3deg);
         -o-transform: rotate(3deg);
         transform: rotate(3deg);
    }
    .main-ul li:nth-child(4):hover a {
         background-color: rgba(51, 143, 144, 0.8);
         -moz-transform: rotate(3deg);
         -webkit-transform: rotate(3deg);
         -o-transform: rotate(3deg);
         transform: rotate(3deg);
    }
    .main-ul li:nth-child(5):hover a {
         background-color: rgba(117, 18, 98, 0.8);
         -moz-transform: rotate(-3deg);
         -webkit-transform: rotate(-3deg);
         -o-transform: rotate(-3deg);
         transform: rotate(-3deg);
    }
    .main-ul li:nth-child(6):hover a {
         background-color: rgba(33, 136, 215, 0.8);
         -moz-transform: rotate(-3deg);
         -webkit-transform: rotate(-3deg);
         -o-transform: rotate(-3deg);
         transform: rotate(-3deg);
    }
    .main-ul li:nth-child(7):hover a {
         background-color: rgba(109, 109, 109, 0.8);
         -moz-transform: rotate(3deg);
         -webkit-transform: rotate(3deg);
         -o-transform: rotate(3deg);
         transform: rotate(3deg);
    }
    .main-ul li:nth-child(8):hover a {
         background-color: rgba(152, 120, 92, 0.8);
         -moz-transform: rotate(3deg);
         -webkit-transform: rotate(3deg);
         -o-transform: rotate(3deg);
         transform: rotate(3deg);
    }

    That’s it, we have built a simple flashy menu: when someone hovers over a menu item, it changes its background color and rotates slightly. Thanks for reading!

  • Prashant Sutariya 8:17 am on February 14, 2012 Permalink | Reply  

    A Fast and Scalable NoSQL Database Service Designed for Internet Scale Applications 

    Jan 18, 2012 is a very exciting day as we release Amazon DynamoDB, a fast, highly reliable and cost-effective NoSQL database service designed for internet scale applications. DynamoDB is the result of 15 years of learning in the areas of large scale non-relational databases and cloud services. Several years ago we published a paper on the details of Amazon’s Dynamo technology, which was one of the first non-relational databases developed at Amazon. The original Dynamo design was based on a core set of strong distributed systems principles resulting in an ultra-scalable and highly reliable database system. Amazon DynamoDB, which is a new service, continues to build on these principles, and also builds on our years of experience with running non-relational databases and cloud services, such as Amazon SimpleDB and Amazon S3, at scale. It is very gratifying to see all of our learning and experience become available to our customers in the form of an easy-to-use managed service.

    Amazon DynamoDB is a fully managed NoSQL database service that provides fast performance at any scale. Today’s web-based applications often encounter database scaling challenges when faced with growth in users, traffic, and data. With Amazon DynamoDB, developers scaling cloud-based applications can start small with just the capacity they need and then increase the request capacity of a given table as their app grows in popularity. Their tables can also grow without limits as their users store increasing amounts of data. Behind the scenes, Amazon DynamoDB automatically spreads the data and traffic for a table over a sufficient number of servers to meet the request capacity specified by the customer. Amazon DynamoDB offers low, predictable latencies at any scale. Customers can typically achieve average service-side latencies in the single-digit milliseconds. Amazon DynamoDB stores data on Solid State Drives (SSDs) and replicates it synchronously across multiple AWS Availability Zones in an AWS Region to provide built-in high availability and data durability.

    History of NoSQL at Amazon – Dynamo

    The Amazon ecommerce platform consists of hundreds of decoupled services developed and managed in a decentralized fashion. Each service encapsulates its own data and presents a hardened API for others to use. Most importantly, direct access to a service’s data from outside that service is not allowed. This architectural pattern was a response to the scaling challenges that had confronted Amazon through its first 5 years, when direct database access was one of the major bottlenecks in scaling and operating the business. While a service-oriented architecture addressed the problems of a centralized database architecture, each service was still using traditional data management systems. The growth of Amazon’s business meant that many of these services needed more scalable database solutions.

    In response, we began to develop a collection of storage and database technologies to address the demanding scalability and reliability requirements of the ecommerce platform. We had been pushing the scalability of commercially available technologies to their limits and finally reached a point where these third party technologies could no longer be used without significant risk. This was not our technology vendors’ fault; Amazon’s scaling needs were beyond the specs for their technologies and we were using them in ways that most of their customers were not. A number of outages at the height of the 2004 holiday shopping season can be traced back to scaling commercial technologies beyond their boundaries.

    Dynamo was born out of our need for a highly reliable, ultra-scalable key/value database. This non-relational, or NoSQL, database was targeted at use cases that were core to the Amazon ecommerce operation, such as the shopping cart and session service. Any downtime or performance degradation in these services has an immediate financial impact and their fault-tolerance and performance requirements for their data systems are very strict. These services also require the ability to scale infrastructure incrementally to accommodate growth in request rates or dataset sizes. Another important requirement for Dynamo was predictability. This is not just predictability of median performance and latency, but also at the end of the distribution (the 99.9th percentile), so we could provide acceptable performance for virtually every customer.

    To achieve all of these goals, we needed to do groundbreaking work. After the successful launch of the first Dynamo system, we documented our experiences in a paper so others could benefit from them. Since then, several Dynamo clones have been built, and the Dynamo paper has been the basis for several other types of distributed databases. This demonstrates that Amazon is not the only company that needs better tools to meet its database needs.

    Lessons learned from Amazon’s Dynamo

    Dynamo has been in use by a number of core services in the ecommerce platform, and their engineers have been very satisfied by its performance and incremental scalability. However, we never saw much adoption beyond these core services. This was remarkable because although Dynamo was originally built to serve the needs of the shopping cart, its design and implementation were much broader and based on input from many other service architects. As we spoke to many senior engineers and service owners, we saw a clear pattern start to emerge in their explanations of why they didn’t adopt Dynamo more broadly: while Dynamo gave them a system that met their reliability, performance, and scalability needs, it did nothing to reduce the operational complexity of running large database systems. Since they were responsible for running their own Dynamo installations, they had to become experts on the various components running in multiple data centers. Also, they needed to make complex tradeoff decisions between consistency, performance, and reliability. This operational complexity was a barrier that kept them from adopting Dynamo.

    During this period, several other systems appeared in the Amazon ecosystem that did meet their requirements for simplified operational complexity, notably Amazon S3 and Amazon SimpleDB. These were built as managed web services that eliminated the operational complexity of managing systems while still providing extremely high durability. Amazon engineers preferred to use these services instead of managing their own databases like Dynamo, even though Dynamo’s functionality was better aligned with their applications’ needs.

    With Dynamo we had taken great care to build a system that met the requirements of our engineers. After evaluations, it was often obvious that Dynamo was ideal for many database use cases. But … we learned that engineers found the prospect of running a large software system daunting and instead looked for less ideal design alternatives that freed them from the burden of managing databases and allowed them to focus on their applications.

    It became obvious that developers strongly preferred simplicity to fine-grained control as they voted “with their feet” and adopted cloud-based AWS solutions, like Amazon S3 and Amazon SimpleDB, over Dynamo. Dynamo might have been the best technology in the world at the time but it was still software you had to run yourself. And nobody wanted to learn how to do that if they didn’t have to. Ultimately, developers wanted a service.

    History of NoSQL at Amazon – SimpleDB

    One of the cloud services Amazon developers preferred for their database needs was Amazon SimpleDB. In the 5 years that SimpleDB has been operational, we have learned a lot from its customers.

    First and foremost, we have learned that a database service that takes away the operational headache of managing distributed systems is extremely powerful. Customers like SimpleDB’s table interface and its flexible data model. Not having to update their schemas when their systems evolve makes life much easier. However, they most appreciate the fact that SimpleDB just works. It provides multi-data center replication, high availability, and offers rock-solid durability. And yet customers never need to worry about setting up, configuring, or patching their database.

    Second, most database workloads do not require the complex query and transaction capabilities of a full-blown relational database. A database service that only presents a table interface with a restricted query set is a very important building block for many developers.

    While SimpleDB has been successful and powers the applications of many customers, it has some limitations that customers have consistently asked us to address.

    Domain scaling limitations. SimpleDB requires customers to manage their datasets in containers called Domains, which have a finite capacity in terms of storage (10 GB) and request throughput. Although many customers worked around SimpleDB’s scaling limitations by partitioning their workloads over many Domains, this side of SimpleDB is certainly not simple. It also fails to meet the requirement of incremental scalability, something that is critical to many customers looking to adopt a NoSQL solution.

    Predictability of Performance. SimpleDB, in keeping with its goal to be simple, indexes all attributes for each item stored in a domain. While this simplifies the customer experience on schema design and provides query flexibility, it has a negative impact on the predictability of performance. For example, every database write needs to update not just the basic record, but also all attribute indices (regardless of whether the customer is using all the indices for querying). Similarly, since the Domain maintains a large number of indices, its working set does not always fit in memory. This impacts the predictability of a Domain’s read latency, particularly as dataset sizes grow.

    Consistency. SimpleDB’s original implementation had taken the “eventually consistent” approach to the extreme and presented customers with consistency windows that were up to a second in duration. This meant the system was not intuitive to use and developers used to a more traditional database solution had trouble adapting to it. The SimpleDB team eventually addressed this issue by enabling customers to specify whether a given read operation should be strongly or eventually consistent.

    Pricing complexity. SimpleDB introduced a very fine-grained pricing dimension called “Machine Hours.” Although most customers have eventually learned how to predict their costs, it was not really transparent or simple.

    Introducing DynamoDB

    As we thought about how to address the limitations of SimpleDB and provide 1) the most scalable NoSQL solution available and 2) predictable high performance, we realized our goals could not be met with the SimpleDB APIs. Some SimpleDB operations require that all data for a Domain is on a single server, which prevents us from providing the seamless scalability our customers are demanding. In addition, SimpleDB APIs assume all item attributes are automatically indexed, which limits performance.

    We concluded that an ideal solution would combine the best parts of the original Dynamo design (incremental scalability, predictable high performance) with the best parts of SimpleDB (ease of administration of a cloud service, consistency, and a table-based data model that is richer than a pure key-value store). These architectural discussions culminated in Amazon DynamoDB, a new NoSQL service that we are excited to release today.

    Amazon DynamoDB is based on the principles of Dynamo, a progenitor of NoSQL, and brings the power of the cloud to the NoSQL database world. It offers customers high-availability, reliability, and incremental scalability, with no limits on dataset size or request throughput for a given table. And it is fast – it runs on the latest in solid-state drive (SSD) technology and incorporates numerous other optimizations to deliver low latency at any scale.

    Amazon DynamoDB is the result of everything we’ve learned from building large-scale, non-relational databases and from building highly scalable and reliable cloud computing services at AWS. Amazon DynamoDB is a NoSQL database service that offers the following benefits:

    • Managed. DynamoDB frees developers from the headaches of provisioning hardware and software, setting up and configuring a distributed database cluster, and managing ongoing cluster operations. It handles all the complexities of scaling, and partitions and re-partitions your data over more machine resources to meet your I/O performance requirements. It also automatically replicates your data across multiple Availability Zones (and automatically re-replicates in the case of disk or node failures) to meet stringent availability and durability requirements. From our experience, we know that manageability is a critical requirement. We have seen many job postings from companies using NoSQL products that are looking for NoSQL database engineers to help scale their installations. We know from our Amazon experience that once these clusters start growing, managing them becomes the same nightmare that running large RDBMS installations was. Because Amazon DynamoDB is a managed service, you won’t need to hire experts to manage your NoSQL installation: your developers can do it themselves.
    • Scalable. Amazon DynamoDB is designed to scale the resources dedicated to a table to hundreds or even thousands of servers spread over multiple Availability Zones to meet your storage and throughput requirements. There are no pre-defined limits to the amount of data each table can store. Developers can store and retrieve any amount of data and DynamoDB will spread the data across more servers as the amount of data stored in your table grows.
    • Fast. Amazon DynamoDB provides high throughput at very low latency. It is also built on Solid State Drives to help optimize for high performance even at high scale. Moreover, by not indexing all attributes, the cost of read and write operations is low as write operations involve updating only the primary key index thereby reducing the latency of both read and write operations. An application running in EC2 will typically see average service-side latencies in the single-digit millisecond range for a 1KB object. Most importantly, DynamoDB latencies are predictable. Even as datasets grow, latencies remain stable due to the distributed nature of DynamoDB’s data placement and request routing algorithms.
    • Durable and Highly Available. Amazon DynamoDB replicates its data over at least 3 different data centers so that the system can continue to operate and serve data even under complex failure scenarios.
    • Flexible. Amazon DynamoDB is an extremely flexible system that does not force its users into a particular data model or a particular consistency model. DynamoDB tables do not have a fixed schema but instead allow each data item to have any number of attributes, including multi-valued attributes. Developers can optionally use stronger consistency models when accessing the database, trading off some performance and availability for a simpler model. They can also take advantage of the atomic increment/decrement functionality of DynamoDB for counters.
    • Low cost. Amazon DynamoDB’s pricing is simple and predictable: Storage is $1 per GB per month. Requests are priced based on how much capacity is reserved: $0.01 per hour for every 10 units of Write Capacity and $0.01 per hour for every 50 units of Read Capacity. A unit of Read (or Write) Capacity equals one read (or write) per second of capacity for items up to 1KB in size. If you use eventually consistent reads, you can achieve twice as many reads per second for a given amount of Read Capacity. Larger items will require additional throughput capacity.
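
    To make the pricing concrete, here is a hypothetical back-of-the-envelope estimate in Python using the rates quoted above (the workload numbers are invented for illustration):

    ```python
    # Monthly cost sketch from the quoted rates: $1 per GB-month of storage,
    # $0.01/hour per 10 units of Write Capacity, $0.01/hour per 50 units of Read Capacity.
    HOURS_PER_MONTH = 24 * 30  # approximate billing month

    def monthly_cost(write_units, read_units, storage_gb):
        write_cost = (write_units / 10) * 0.01 * HOURS_PER_MONTH
        read_cost = (read_units / 50) * 0.01 * HOURS_PER_MONTH
        storage_cost = storage_gb * 1.00
        return write_cost + read_cost + storage_cost

    # Hypothetical table: 100 writes/s, 500 strongly consistent reads/s, 20 GB stored.
    print(monthly_cost(100, 500, 20))  # 164.0 dollars per month
    ```

    Switching that table to eventually consistent reads would halve the read-capacity portion of the bill, per the note above.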

    In the current release, customers will have the choice of using two types of keys for primary index querying: Simple Hash Keys and Composite Hash Key / Range Keys:

    Simple Hash Key gives DynamoDB the Distributed Hash Table abstraction. The key is hashed over the different partitions to optimize workload distribution. For more background on this please read the original Dynamo paper.
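
    The Distributed Hash Table idea can be sketched as follows (an illustrative Python model, not DynamoDB’s actual partitioning scheme):

    ```python
    import hashlib

    def partition_for(hash_key, num_partitions):
        """Deterministically map a hash key to one of num_partitions."""
        digest = hashlib.md5(hash_key.encode("utf-8")).hexdigest()
        return int(digest, 16) % num_partitions

    # The same key always routes to the same partition, so reads and
    # writes for a given item hit one well-known group of servers.
    assert partition_for("customer-42", 8) == partition_for("customer-42", 8)

    # Different keys spread the workload across partitions.
    used = {partition_for("user-%d" % i, 8) for i in range(1000)}
    print(sorted(used))  # with 1000 keys, all 8 partitions are almost surely used
    ```

    This uniform spreading is what lets the service add partitions as a table grows without hot-spotting any single server.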

    Composite Hash Key with Range Key allows the developer to create a primary key that is the composite of two attributes, a “hash attribute” and a “range attribute.” When querying against a composite key, the hash attribute must be matched exactly, while a range operation can be specified for the range attribute: e.g., all orders from Werner in the past 24 hours, or all log entries from server 16 with client IP addresses in a given subnet.
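
    The composite-key query pattern can be modeled conceptually like this (a toy in-memory Python sketch, not the DynamoDB API):

    ```python
    import bisect

    table = {}  # hash attribute -> sorted list of (range attribute, item)

    def put(hash_key, range_key, item):
        # Keep each hash key's rows ordered by the range attribute.
        bisect.insort(table.setdefault(hash_key, []), (range_key, item))

    def query(hash_key, low, high):
        """Exact match on the hash attribute, range condition on the range attribute."""
        return [item for rk, item in table.get(hash_key, []) if low <= rk <= high]

    put("Werner", 10, "order-a")
    put("Werner", 25, "order-b")
    put("server-16", 3, "log-entry-1")

    print(query("Werner", 0, 20))  # ['order-a']
    ```

    Because all rows for one hash key are stored together in range order, a range query touches only a contiguous slice of one partition's data.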

    Performance Predictability in DynamoDB

    In addition to taking the best ideas of Dynamo and SimpleDB, we have added new functionality to provide even greater performance predictability.

    Cloud-based systems have developed solutions to ensure fairness and present their customers with uniform performance, so that a burst load from one customer does not adversely impact others. This is a great approach and makes for many happy customers, but it often does not give a single customer the ability to ask for higher throughput when they need it.

    As satisfied as engineers can be with the simplicity of cloud-based solutions, they would love to specify the request throughput they need and let the system reconfigure itself to meet their requirements. Without this ability, engineers often have to carefully manage caching systems to ensure they can achieve low-latency and predictable performance as their workloads scale. This introduces complexity that takes away some of the simplicity of using cloud-based solutions.

    The number of applications that need this type of performance predictability is increasing: online gaming, social graphs applications, online advertising, and real-time analytics to name a few. AWS customers are building increasingly sophisticated applications that could benefit from a database that can give them fast, predictable performance that exactly matches their needs.

    Amazon DynamoDB’s answer to this problem is “Provisioned Throughput.” Customers can now specify the request throughput capacity they require for a given table. Behind the scenes, DynamoDB will allocate sufficient resources to the table to predictably achieve this throughput with low-latency performance. Throughput reservations are elastic, so customers can increase or decrease the throughput capacity of a table on-demand using the AWS Management Console or the DynamoDB APIs. CloudWatch metrics enable customers to make informed decisions about the right amount of throughput to dedicate to a particular table. Customers using the service tell us that it enables them to achieve the appropriate amount of control over scaling and performance while maintaining simplicity. Rather than adding server infrastructure and re-partitioning their data, they simply change a value in the management console and DynamoDB takes care of the rest.
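
    One way to picture provisioned throughput is as a token bucket refilled at the provisioned rate (a simplified Python sketch; DynamoDB’s real admission control is internal to the service):

    ```python
    import time

    class ProvisionedTable:
        """Toy model: a table provisioned for N capacity units per second."""

        def __init__(self, units_per_second):
            self.rate = units_per_second
            self.tokens = float(units_per_second)  # allow an initial burst
            self.last = time.monotonic()

        def request(self, units=1):
            now = time.monotonic()
            # Refill at the provisioned rate, capped at one second of burst.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= units:
                self.tokens -= units
                return True
            return False  # this request would be throttled

    table = ProvisionedTable(units_per_second=5)
    granted = sum(table.request() for _ in range(20))
    print(granted)  # roughly the provisioned burst of 5 when issued back-to-back
    ```

    Raising the provisioned throughput is then just raising the refill rate, which matches the "change a value in the management console" experience described above.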


    Amazon DynamoDB is designed to maintain predictably high performance and to be highly cost efficient for workloads of any scale, from the smallest to the largest internet-scale applications. You can get started with Amazon DynamoDB using a free tier that includes 40 million requests per month free of charge. Additional request capacity is priced at cost-efficient hourly rates as low as $0.01 per hour for 10 units of Write Capacity or 50 strongly consistent units of Read Capacity (if you use eventually consistent reads, you can get twice the throughput at the same cost, or the same read throughput at half the cost). Replicated solid state drive (SSD) storage is $1 per GB per month. Our low request pricing is designed to meet the needs of typical database workloads that perform large numbers of reads and writes against every GB of data stored.

    To learn more about Amazon DynamoDB, its functionality, APIs, use cases, and service pricing, please visit the detail page and the Developer Guide. I am excited to see the years of experience with systems such as Amazon Dynamo result in an innovative database service that can be broadly used by all our customers.

  • Techmodi 7:59 am on January 10, 2012 Permalink | Reply

    TechModi made it in the top 15 providers 


    Techmodi ranked 11th out of 29,187 registered providers on Elance across the globe, placing it among the top service providers.

    Techmodi Rank 11th on Elance
  • Techmodi 12:10 pm on December 25, 2011 Permalink | Reply
    Tags: christmas techmodi, christmas, festivals in india


    Christmas Greetings


    Christmas is an annual commemoration of the birth of Jesus Christ, generally celebrated on 25th December as a religious and cultural holiday by billions of people around the world. In India, too, we celebrate Christmas to mark the birth of Jesus Christ; it is celebrated not only by Christians but by people of all religions, across all 28 states of India. In many places, people gather in large numbers to mark the birth of Christ.

    Techmodi also celebrates Christmas and the festive season with all its employees in the office to ensure that there is fun at work. Techmodi wishes everyone a Merry Christmas and a Happy New Year 2012.
