Monday, December 17, 2012

SAP HANA with more cloud options

Since my last blog posts on SAP's innovation spree, covering SAP HANA and then SAP HANA on AWS, SAP continues to expand its strong application service portfolio by joining hands with more cloud vendors. After launching its HANA cloud services on Amazon Web Services (AWS), SAP now provides the SAP HANA One developer edition on KT ucloud, CloudShare, and Portugal Telecom's SmartCloudPT. And these service providers seem to be offering SAP HANA as a Service (sounds interesting!!) at even lower price points than AWS.

The detailed list of provided services and charges is available here:

But for quick reference, a summary of the cloud service offerings (taken from the link mentioned above) is provided below, partner by partner:

Amazon Web Services

Amazon provides a secure, scalable, and highly elastic cloud infrastructure for running SAP HANA. With AWS there is no need to procure IT infrastructure weeks or months in advance; capacity can be provisioned within minutes, scaled based on workload or on demand, and you pay only for the resources used.
  • There is no SAP charge for the SAP HANA One developer edition license; only the usual AWS account charges apply.
  • The flexible and open AWS platform supports a wide variety of languages and operating systems, so you can choose the development platform or programming model that best suits your needs.
  • You can utilize other AWS cloud infrastructure services, as well as a host of third-party development tools already available on AWS.
  • AWS provides a massive global cloud infrastructure that allows you to quickly innovate, experiment, and iterate.

Instance types:
  • 2 vCPUs, 17.1 GB RAM, 154 GB disk
  • 4 vCPUs, 34.2 GB RAM, 154 GB disk
  • 8 vCPUs, 68.4 GB RAM, 154 GB disk

Price: starting from $0.55 / hour

CloudShare

Get SAP HANA One, developer edition in the cloud with CloudShare.
  • Flat $137 per month: no overages, no time limits
  • Includes the SAP HANA server and a pre-configured developer environment (24 GB RAM); no IT department, no VPN required

Instance types:
  • 4 vCPUs, 22 GB RAM, 150 GB disk
  • Plus one remote desktop with Windows 7: 1 vCPU, 2 GB RAM, 20 GB disk (HANA studio, client, additional tools)

Price: $137 / month

SmartCloudPT (Portugal Telecom)

SAP HANA One, developer edition in SmartCloudPT provides simple and immediate access to a configured SAP HANA development environment, with community support from SAP, running on the enterprise-grade SmartCloudPT infrastructure. The service fee is prepaid using a credit card and can be renewed at the end of each month. Fixed monthly pricing starts at €159/month for a 16 GB RAM cloud instance. Unlike other public clouds, pricing is fixed per month and fully inclusive, with no hidden costs or surcharges! All environments run on the latest, enterprise-grade infrastructure and come with 24/7 instance availability.

Instance types:
  • 4 vCPUs, 16 GB RAM, 200 GB disk
  • 6 vCPUs, 32 GB RAM, 300 GB disk

Price: starting from €159 / month

KT ucloud biz

Get a personal SAP HANA One, developer edition on kt ucloud within 5 minutes.
  • Accessible from all over the world, and geographically very close to East and Southeast Asia
  • On-demand provisioning of HANA VM instances up to 16 vCores / 128 GB
  • Both hourly and monthly payment available, as requested
  • A 4-VM clustered HANA landscape, as well as a single VM (16 vCores / 120 GB) on one physical host, is now available

Instance types:
  • 2 vCPUs, 16 GB RAM, 90 GB disk
  • 4 vCPUs, 32 GB RAM, 140 GB disk
  • 8 vCPUs, 64 GB RAM, 230 GB disk
  • 16 vCPUs, 128 GB RAM, 420 GB disk
  • 16 vCPUs, 120 GB RAM, 420 GB disk on a dedicated server

Price: starting from $0.286 / hour
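Out of curiosity, here is a quick break-even sketch between the hourly and flat-monthly entry prices quoted above. The figures are only the headline rates (they ignore storage, bandwidth, and other surcharges), so treat this as a rough estimate, not a billing calculator:

```python
# Rough break-even between an hourly cloud rate and a flat monthly fee,
# using the entry-level prices from the table above.

AWS_HOURLY = 0.55            # USD/hour, smallest AWS instance
CLOUDSHARE_MONTHLY = 137.0   # USD/month, CloudShare flat fee

break_even_hours = CLOUDSHARE_MONTHLY / AWS_HOURLY
print(f"Break-even: {break_even_hours:.0f} hours/month")

# Below roughly 249 hours of use per month, the hourly plan is cheaper;
# above that, the flat monthly plan wins.
```

So for a developer who only spins up HANA a few evenings a week, the hourly model can undercut the flat fee; for an always-on instance (about 720 hours a month), the flat fee is clearly cheaper.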

Also, a very good comparison of pricing between the SAP HANA appliance and the SAP HANA service on the AWS cloud was made by Bill Ramos in his interesting blog post here.
The comparison is based on the fact that the SAP HANA One developer edition is provided free of charge by SAP on a one-month trial basis. So there are no SAP charges for using SAP HANA in the cloud; only the standard cloud usage charges apply.

Wednesday, November 21, 2012


In the previous few blog posts, there were quick introductions to SAP, one of the world's largest independent enterprise software vendors, and to its latest innovation marvel for enterprises, SAP HANA.
Now they have taken this effort one step further, by bringing SAP HANA to the cloud.
Yes, SAP's in-memory database and platform capabilities are now available as a pay-as-you-go commodity on Amazon Web Services.

What it means: If an organization needs fast processing power but is hesitant to make the huge investment of purchasing the entire in-house solution (the SAP HANA appliance is really costly!!), it can opt for this offering and use it for as long as it wants. It is like paying rent on a huge villa and enjoying living there for just the time you want.

Advantages: Enterprises seeking fast processing for regularly recurring (but not frequently occurring) tasks, like processing payroll for all employees at the end of every month, or analyzing sales trends to prepare for a specific event, can take advantage of this time-based solution.

At first look, this idea seems to be a fast, stable, and reliable business solution, but things are not always as nice up close as they appear from afar. There may be potential factors involved, like massive data transfers or additional hidden charges and add-ons. But all factors apart, this solution promises a bright future for the Software as a Service (SaaS) market.

The pricing details are available here.
For more in-depth technical details, visit SAP's website here.

Tuesday, July 24, 2012

Simple Architectures for Complex Enterprises: Web Short: The Mathematics of Cloud Optimization

A really nice blog post that I wanted to share with everyone.

Simple Architectures for Complex Enterprises: Web Short: The Mathematics of Cloud Optimization: How do you minimize the cost of running your large mission critical application on a public cloud? Do you focus on finding a low cost cloud or ...

Sunday, July 15, 2012

Issue/resolution for Google Play Store in Samsung Galaxy S Plus-GT-I9001

Recently, many users of the Samsung Galaxy S Plus (including myself) have reported a bug with the Google Play Store. The device cannot update or download anything from the Play Store: it gives download error 905 after the download completes 100%, and applications do not get installed or updated from Google Play. Downloads and installs from other sources work fine. (I tried installing Truecaller while this issue was present on my phone; it worked fine.)

After some trials, I found a way to resolve this.

Here is what I did:

1) Go to "Application" (lower bottom right hand corner on the home screen) > Settings > Applications > Manage applications

2) Select the "All" tab

3) Scroll down and select Google Play Store

4) Select "Uninstall updates"

This will uninstall the Google Play Store updates, taking it back to the older Android Market.
After a few minutes, it automatically gets updated and turns back into the Google Play Store (yes, automatically).

Now when I download anything from Google Play, it works fine for me.
Please try this out and let me know if it works for you as well.

Friday, July 6, 2012

Microsoft Infrastructure Optimization

Along with the growth of any organization, its IT infrastructure also grows, becoming more complex and difficult to maintain. To control and manage this growth, guidelines or a pathway are needed. "Infrastructure Optimization" (or IO) refers to optimizing an organization's IT infrastructure to support and improve its business, i.e., turning the IT infrastructure from a 'cost center' into a 'business center' for the organization.

In 2010, Microsoft had released details about 3 models as guidelines for Infrastructure Optimization of any organization:

1) Core Infrastructure Optimization Model

2) Business Productivity Infrastructure Optimization Model

3) Application Platform Optimization Model

Each of these models focuses on a different set of IT attributes (called Capabilities) used within the organization, depending on the level of IT use made by the organization. These capabilities provide guidelines for scaling and optimizing the organization's IT infrastructure. In any organization, all three models are applied simultaneously. Details of these models are provided in the documents referenced below.

Note: The organization implementing the IO model should have a basic IT department in place and a good number of staff to maintain it. Also, the level of optimization achieved (Basic, Standardized, Rationalized, or Dynamic) depends on organization policies and on the convincing capabilities of the salesperson.

IBM: IBM offers IBM Infrastructure Optimization Services with cloud computing, which focus on testing the current infrastructure for efficiency and determining its capacity for change and responsiveness. Through its Testing Center of Excellence, test plans for introducing technologies such as service-oriented architectures can be made, and users can perform integration, performance, and scalability testing. These services can help speed implementation and time to value with reduced risk, helping customers build an infrastructure that is designed to support business flexibility, improve resource utilization, and thus reduce overall IT costs and complexity.

These IO models help customers understand, scale, and gradually improve the current state of their IT infrastructures in terms of cost, security, risk, and operational agility. Dramatic cost savings can be realized by moving from an unmanaged environment (the 'Basic' level) toward an environment of fully automated management and dynamic resource usage (the 'Dynamic' level). Security improves from highly vulnerable in a Basic infrastructure to dynamically proactive in a more optimized infrastructure, and IT infrastructure management changes from highly manual and reactive to highly automated and proactive. This results in an overall benefit to the organization.
References:

1) 'Infrastructure Optimization at Microsoft'

2) 'Infrastructure optimization services'

3) 'Offerings: Infrastructure Optimization'

Wednesday, May 2, 2012

Forefront Endpoint Protection 2010 is now System Center 2012 Endpoint Protection

Forefront Endpoint Protection 2010 is now System Center 2012 Endpoint Protection !!!

Forefront Endpoint Protection (FEP) 2010 was closely tied to System Center Configuration Manager (SCCM) through its infrastructure. While this was beneficial for organizations that had already adopted SCCM, it was a challenge for those without it implemented.

Now, in 2012, endpoint protection is part of the System Center family and takes the name System Center 2012 Endpoint Protection (SCEP).

The new features of System Center 2012 Endpoint Protection include:

  • Support for System Center 2012 Configuration Manager RC, including integrated setup, management, and reporting
  • Role-based management across security and operations
  • Improved alerting and reporting, with real-time and user-centric data views
  • More efficient delivery of signature updates using a new automatic software deployment model

The integration of SCEP with SCCM allows central management of all endpoints. Users can now scan endpoints for both updates and viruses via a single product family (System Center).

When this was released, there was simply no word to express the reaction... WOW!!!

Monday, April 30, 2012

Good articles on Qlik View

Microsoft's new PowerPivot from a QlikView standpoint (a bit old, but interesting content, especially the comments below the article)
A Microsoft guy does QlikView (QlikView from a Microsoft guy's perspective).
In-memory BI is not the future. It's the past. (a comparison between in-memory and traditional disk-based technology). An excerpt: "The success of QlikTech and the relentless activities of Microsoft's marketing machine have managed to confuse many in terms of what role in-memory technology plays in BI implementations. And that is why many of the articles out there, which are written by marketers or market analysts who are not proficient in the internal workings of database technology (and assume their readers aren't either), are usually filled with inaccuracies and, in many cases, pure nonsense."

Tuesday, March 13, 2012

Microsoft’s High Availability solutions

A high-availability solution masks the effects of a hardware or software failure and maintains the availability of applications so that the perceived downtime for users is minimized. The basics of high availability were already discussed in an earlier blog post; this post focuses on the various high-availability solutions provided by Microsoft, which are as follows:

Failover Cluster

A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. All nodes in this group are managed as a single system and share a common namespace. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). With this, users experience a minimum of disruptions in service.
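As a toy illustration of the failover idea only (real Windows failover clustering involves heartbeats, quorum, and shared storage, none of which are modeled here), a minimal Python sketch with hypothetical node names:

```python
# Toy model of failover: serve from the first healthy node in priority
# order; when it fails, the next healthy node takes over automatically.

def active_node(nodes, healthy):
    """Return the first healthy node in priority order, or None."""
    for node in nodes:
        if healthy.get(node, False):
            return node
    return None

nodes = ["node1", "node2", "node3"]          # hypothetical cluster nodes
health = {"node1": True, "node2": True, "node3": True}

assert active_node(nodes, health) == "node1"  # normal operation
health["node1"] = False                       # node1 fails...
assert active_node(nodes, health) == "node2"  # ...node2 begins to serve
```

Clients keep addressing the cluster as a single system; which node actually answers is decided by this kind of health-driven selection.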

More details:

Database mirroring

Database mirroring maintains two copies of a single database that must reside on different server instances of SQL Server Database Engine. Typically, these server instances reside on computers in different locations. One server instance serves the database to clients (the principal server). The other instance acts as a hot or warm standby server (the mirror server), depending on the configuration and state of the mirroring session. When a database mirroring session is synchronized, database mirroring provides a hot standby server that supports rapid failover without a loss of data from committed transactions. When the session is not synchronized, the mirror server is typically available as a warm standby server (with possible data loss). Database mirroring is implemented on a per-database basis and works only with databases that use the full recovery model.

More details:

Log Shipping

The log shipping mechanism is based on replicating the changes made in a primary database to one or more secondary (backup) databases by restoring the transaction log backups generated from the primary database. A log shipping configuration automatically sends transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each of the secondary databases individually. An optional third server instance, known as the monitor server, records the history and status of backup and restore operations and, optionally, raises alerts if these operations fail to occur as scheduled.
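The flow described above can be sketched as a toy Python model; the names and structures here are illustrative only, not SQL Server internals:

```python
# Toy log shipping: the primary produces an ordered transaction log;
# each secondary replays only the records it has not yet applied.

primary_log = []                     # ordered transaction log records

def write(record):
    """A change on the primary appends a record to its transaction log."""
    primary_log.append(record)

class Secondary:
    def __init__(self):
        self.data = []
        self.applied = 0             # count of log records already applied

    def restore_logs(self):
        """Apply only the log records shipped since the last restore."""
        for record in primary_log[self.applied:]:
            self.data.append(record)
        self.applied = len(primary_log)

write("INSERT row 1")
write("INSERT row 2")
standby = Secondary()
standby.restore_logs()               # initial catch-up
write("UPDATE row 1")
standby.restore_logs()               # incremental restore
# standby.data now mirrors all three changes, in order
```

The key property is that the secondary always lags by at most the shipping interval, which is why log shipping gives a warm (not hot) standby.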

More details:

Data Replication

Data Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. The result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others. Using replication, one can distribute data to various locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.

Database replication can be done in at least three different ways:

  • Snapshot replication: Data on one server is simply copied to another server, or to another database on the same server.
  • Merge replication: Data from two or more databases is combined into a single database.
  • Transactional replication: Users receive full initial copies of the database and then receive periodic updates as data changes.

More details:


AlwaysOn, a new high-availability solution available with SQL Server 2012, is a combined form of Microsoft's existing high-availability (HA) and disaster-recovery (DR) functionalities, such as database mirroring, failover clustering, and log shipping. This integration makes them work better together and removes a lot of the customer-required setup and tuning, thus helping eliminate potential errors. An AlwaysOn solution can leverage two major SQL Server 2012 features for configuring availability at both the database and the instance level:

  • AlwaysOn Availability Groups, new in SQL Server 2012, greatly enhance the capabilities of database mirroring and help ensure availability of application databases; they enable zero data loss through log-based data movement for data protection without shared disks. Availability groups provide an integrated set of options, including automatic and manual failover of a logical group of databases, support for up to four secondary replicas, fast application failover, and automatic page repair.
  • AlwaysOn Failover Cluster Instances (FCIs) enhance the SQL Server failover clustering feature and support multisite clustering across subnets, which enables cross-data-center failover of SQL Server instances. Faster and more predictable instance failover is another key benefit that enables faster application recovery.

More details:

There are some basic differences between the implementation scenarios for each of these technologies, which will be covered in a future blog post.


Tuesday, March 6, 2012

SAP selects IBM DB2 as a strategic database platform

Very recently, SAP AG adopted IBM DB2 as its database of choice for reducing the complexity and operational costs of its IT landscape, particularly for its Human Capital Management (HCM), ERP, and Business Intelligence applications. The case study outlines how SAP saw major benefits, including:

- Improved response times and efficiency

- High user productivity


The Challenge

SAP IT (the IT department of SAP AG) wanted to be able to take advantage of new SAP software functionalities while reducing the complexity and operational costs of its IT landscape. The company also wanted to move to a new database platform to deliver optimal performance.

The Solution

In three separate projects, SAP IT upgraded its Human Capital Management (HCM), ERP and Business Intelligence applications, simultaneously performing Unicode conversion and migrating databases from Oracle to IBM DB2.


“The efficiency of the new IBM DB2 solution has given us headroom within our database and storage servers to grow as business workload rises, with high user productivity and great return on investment.”

~Peter Boegler, Solution Architect, SAP IT

Another quotable quote:

"In addition to the Unicode conversion, we also had a corporate objective to migrate away from our existing Oracle database and onto IBM DB2, which is now the recommended database for SAP software," says Peter Boegler.

(A real-world case study)

Thursday, March 1, 2012

SAP HANA is real real-time technology

Why is SAP HANA so fast?

Recently, SAP released its incredibly innovative idea of in-memory computing for providing real real-time computing, along with an appliance, SAP HANA (High Performance Analytical Appliance). It is claimed to be a game-changing innovation, one which has forced other vendors to take a step forward and jump into this dimension of technology.
As claimed by SAP, the HANA appliance achieves outstanding figures:
  • The ability to scan 2 million records per millisecond per core, and over 10 million complex aggregations calculated on the fly per second per core
  • Full parallelization at 1,000 cores and beyond
  • A 450-billion-record system implemented on less than three terabytes of physical memory
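As a back-of-the-envelope check of the first figure, here is the arithmetic in Python. Note that scaling the per-core rate linearly to 1,000 cores is my own simplifying assumption, not something SAP's numbers state:

```python
# SAP's claim: 2 million records scanned per millisecond per core.
# Convert to per-second throughput and (assuming linear scaling)
# aggregate across 1,000 cores.

records_per_ms_per_core = 2_000_000
cores = 1_000

per_second_per_core = records_per_ms_per_core * 1_000    # ms -> s
total_per_second = per_second_per_core * cores

print(f"{total_per_second:.1e} records scanned per second")  # 2.0e+12
```

That is on the order of two trillion record scans per second for a fully parallel 1,000-core system, which puts the "450 billion records in under three terabytes" claim into perspective.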

An explanation of this outstanding performance, as given by John Appleby, Head of Business Analytics & Technology at Bluefin Solutions:

"Regular RDBMS technologies put the information on spinning plates of iron (hard disks) from which the information is retrieved. HANA stores information in electronic memory, which is some 50x faster (depending on how you calculate). HANA stores a copy on magnetic disk, in case of power failure or the like. In addition, most SAP systems have the database on one system and a calculation engine on another, and they pass information between them. With HANA, this all happens within the same machine."

(content taken from publicly available information available here)

This really shows the potential of a game-changing technology breakthrough, forcing vendors to work on the in-memory concept. And if this continues in the same fashion, one day everyone will be carrying what we know today as a supercomputer in their pocket.

Monday, February 27, 2012

A quick brief about SAP

SAP is one of the world's largest independent enterprise software vendors, providing specialized software applications for enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), product lifecycle management (PLM), supplier relationship management (SRM), and business intelligence (BI).

SAP provides enterprise software solutions (like SAP R/3) to a large number of multinational companies that require fast, stable, and reliable business applications. These applications (categorized as platform applications, extension applications, and composite applications) enable them to handle millions of transactions in multiple currencies and maintain an international presence. The applications are available as smaller components, which can be customized and configured per the needs of the industry.

If further modification or functionality enhancement is needed, these applications can be customized using the SAP NetWeaver development technologies. These include ABAP (a programming language with an integrated development environment, somewhat similar to COBOL), Java, and the Composition development framework (tools and environments spanning heterogeneous systems).

Wednesday, February 22, 2012

High Availability Basic Concepts-I

For any software application or service, high availability refers to the availability of that application or service to its users without failure. As the simplest example, Google provides its search capabilities to (virtually) all Internet users 24x7 via its search engine. We assume that as soon as we switch on a PC or laptop (or any other compatible device) and connect to the Internet, Google search will be available to us. The word "virtually" compensates for those small periods when the search engine is not available to users due to server maintenance or other reasons. This duration is called downtime, and it is usually measured over a year. So if an application or service provider claims 99.9% availability, it means that over a year's time its services may be down for 0.1% of the year, i.e., about 8 hours and 45 minutes.

The primary goal of any high-availability solution is to minimize the impact of downtime, and the Service Level Agreement for any such solution (or service) always covers these clarifications in its terms and conditions. The availability of a solution (an application or service, or a group of them) can be expressed with this calculation:

Availability = ( Actual Uptime / Expected Uptime ) x 100

The resulting value is often expressed by the industry in terms of the number of 9's that the solution provides, meant to convey an annual number of minutes of possible uptime or, conversely, minutes of downtime.

Number of 9's   Availability Percentage   Total Annual Downtime
2               99%                       3 days, 15 hours
3               99.9%                     8 hours, 45 minutes
4               99.99%                    52 minutes, 34 seconds
5               99.999%                   5 minutes, 15 seconds
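The formula and the downtime figures behind the "nines" can be reproduced with a few lines of Python:

```python
# Availability = (actual uptime / expected uptime) * 100, and the annual
# downtime implied by an availability percentage.

MINUTES_PER_YEAR = 365 * 24 * 60     # 525,600 minutes

def availability(actual_uptime, expected_uptime):
    """Availability as a percentage."""
    return (actual_uptime / expected_uptime) * 100

def annual_downtime_minutes(availability_pct):
    """Minutes of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {annual_downtime_minutes(pct):.0f} minutes down/year")
```

Running this gives roughly 5,256 minutes (3 days 15 hours) for two nines, 526 minutes (8 hours 45 minutes) for three nines, and so on, matching the table above.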

More details on various high-availability applications and services will be covered very soon (as soon as I have some free time for it :)

Alright, I got some time to generate some content, in the blog post Microsoft's High Availability Solutions.

Wednesday, February 15, 2012

Journey of Crystal Reports

For one of my recent assignments, I had to dig out the entire history of the world-famous reporting product "Crystal Reports": how it originated, transformed, and evolved into what we see today as SAP Crystal Reports 2011. Here is a brief summary:

Crystal Reports is a business intelligence application used to design and generate reports from a wide range of data sources. Started as Crystal Services Inc. in 1989, the company developed the product as a commercial report-writing tool for its accounting software. It released three initial versions: Quik Reports (1990), Quik Reports 2.0 (1991), and Quik Reports 3.0 (1992).

Then, after its acquisition by Seagate Technology Inc. in 1994, the company was renamed Seagate Software. The product was also renamed and launched as Seagate Info 4.0 in 1995. In 1995, Seagate Software decided to collaborate with Holistic Systems (also acquired by Seagate Technology Inc.), forming the Information Management Group of Seagate Software. Under this collaboration the product was rebranded, and users enjoyed five versions: Crystal Reports 4.5 (1996), Crystal Info 5 (1997), Seagate Crystal Info 6 (1998), Seagate Info 7 (1999), and Seagate Info 7.5 (2000). (And yes, the name changed after almost every release; they just could not find the right name for it!!)

In 2001, the company was renamed again, from Seagate Software to Crystal Decisions. It then released Crystal Enterprise 8.0, Crystal Enterprise 8.5 (2001), and Crystal Enterprise 9.0 (2001) in quick succession.

It was then acquired by the famous business intelligence solution provider BusinessObjects in 2003. The first version released after this acquisition carried the same naming format as the earlier trend: Crystal Enterprise 10.0, in 2004. The product was then released under revised names as Crystal Reports XI R1 (2005) and Crystal Reports XI R2 (2006).

With the acquisition of BusinessObjects by SAP in 2007, the product again witnessed a change of name and was released as Crystal Reports 2008 (2008).

The latest release is named SAP Crystal Reports 2011.

For details on the future roadmap of this product, and what users can expect from SAP for it, here is the SAP Crystal Reports 2011 and the 20-year roadmap.

Friday, February 10, 2012

Business Intelligence Technology Stack

Business intelligence generally refers to the identification, extraction, and transformation of business data into useful information (reports, charts, graphs, etc.) to gain business-specific insights like demand forecasts and sales predictions, thus providing better decision-making capabilities. It usually refers to computer-based techniques, like reporting, analytics, data mining, benchmarking, and predictive analysis, but is not limited to them.

As explained by D. J. Power in his work "A Brief History of Decision Support Systems", various tools and technologies provide business intelligence capabilities and an efficient Decision Support System (DSS). His research covers even basic systems like file drawers, used to keep information organized and readily searchable (for small organizations). But in the present information age, those kinds of systems seem outdated for the requirements of a global organization, with hundreds of branches across the world, generating and processing huge amounts of information every hour. In this article, we focus only on computer-based programs and applications that consume and process the digital information available on an organization's servers, and then generate meaningful results from it, enabling better decisions from BDMs (Business Decision Makers), TDMs (Technology Decision Makers), and other IT pros involved in decision making.

The well-known analyst firm Gartner predicts five-fold growth in open-source BI tool deployments by the end of 2012. Gartner also mentioned in its report Magic Quadrant for Business Intelligence Platforms 2011 that growth in BI will be driven by factors like the consumerization of BI and support for extreme data performance with emerging data sources (known as Big Data). And with some recent breakthrough innovations by the major BI vendors (like SAP's HANA appliance, Oracle's Exalytics appliance, and Microsoft's BISM model), the IT world may expect more surprises from the major BI vendors (including, but not limited to, Microsoft, Oracle, MicroStrategy, IBM, Information Builders, QlikTech, SAP, and SAS).

But irrespective of vendor, all BI solutions have a generic technology stack, with following layers:

· User Interface: This includes the web-based or application-based front end that brings the analysis to the users. It includes portals (in the case of networked or web-based analytics) or the application front end in the case of a locally deployed BI solution.

· Development and Admin Tools: This comprises the tools, languages, and processes involved in the development and management of BI applications and systems. (The difference between BI systems and BI solutions will be covered in another blog post.) For example, some BI development languages are MultiDimensional eXpressions (MDX), XML for Analysis (XMLA), and Data Mining Extensions (DMX).

· BI Tools: This comprises the tools (reports, dashboards, or otherwise) that enable users to perform the desired analysis on the underlying data. Users access these tools via the User Interface layer discussed above. Microsoft's PowerPivot and Power View, SAP's Crystal Reports, Jaspersoft, and Oracle's Business Intelligence Foundation Suite are just a few examples; there are more than 100 readily usable vendor products available in the market.

· Applications and BI Data Sources: This comprises the various sources that keep information in a pre-processed form that can be readily consumed for analysis. It includes models like Online Analytical Processing (OLAP) cubes and Decision Support Systems, and concepts like data mining, analysis services, etc.

· Data Integration Tools: This comprises the various data management tools and concepts like Master Data Management (covering data collection, source identification, schema mapping, normalization, data transformation, rule administration, error detection and correction, data consolidation, data storage, data distribution, data classification, item master creation, data enrichment, and data governance) and services like taxonomy services, Data Quality Services, etc.

· Data Warehouse Platform: This comprises the various data sources, including simple text-based files, Excel sheets, relational databases, and even complex unstructured data types like audio files, videos, web logs, click streams, geo-spatial data, etc.

Wednesday, January 25, 2012

Brief history of Microsoft SQL Server

I just got curious about the entire history of how SQL Server has evolved since its birth. Here is a short blog post reflecting that research.

For an interesting story of how SQL Server evolved, please refer to the document History of SQL Server.

A brief history of SQL Server is given in the table below (along with the relevant links to the corresponding resources):


SQL Server Version                                                        Code Name
SQL Server 2012                                                           —
SQL Server 2008 R2                                                        Kilimanjaro (aka KJ)
SQL Azure                                                                 Matrix (aka CloudDB)
SQL Server 2008                                                           —
SQL Server Integration Services (formerly Data Transformation Services)   —
SQL Server 2005                                                           —
SQL Server 2000 Reporting Services                                        —
SQL Server 2000 (64-bit Edition)                                          —
SQL Server 2000 Analysis Services                                         —
SQL Server 2000                                                           —
SQL Server 7.0 OLAP Services (including Data Transformation Services)     —
SQL Server 7.0                                                            —
SQL Server 6.5                                                            —
SQL Server 6.0                                                            —
SQL Server 4.2 (32-bit Edition)                                           —
SQL Server 1.1                                                            —
SQL Server 1.0 (16-bit)                                                   Ashton-Tate/Microsoft SQL Server

Also, details of some of the specific release dates and build numbers are available on the MSDN link.

Some guidelines on upgrade paths (up to SQL Server 2008 R2) are available on MSDN here.

More technical details about each version are available here.
