Saturday, 30 January 2016

Union Node Pruning in Modeling with Calculation View

Data modeling in SAP HANA using calculation views is one of the most powerful capabilities offered to end users: it lets you mold raw data into a well-structured result set by leveraging the many operational capabilities a calculation view exposes. Along the way, there are several aspects we need to think about.
Let me take a few lines here to quote a real-world example that gives better clarity on my point.
We all know there are two major parameters we generally take into consideration when qualifying or defining the standard of any automobile: its horsepower (HP) and its mileage. There is always a trade-off between the two, by which I mean that a higher-HP automobile yields reduced mileage and vice versa. Why does this happen? Because we make the underlying engine generate more HP, and in doing so it consumes most of the fuel for that purpose.

Friday, 29 January 2016

SuccessFactors Adapter in SAP HANA Cloud Integration (SAP HCI)


With more customers moving towards a cloud-based IT investment strategy for their HCM solution, the need to integrate with their existing on-premise setup and other third-party systems is on the rise. Large companies generally move towards a cloud HCM investment like SuccessFactors in a phased manner, and the phased approach generally runs along two dimensions. The first is the solution dimension, where only certain processes are moved to the cloud first (for example Performance Management, Compensation Management, or Recruiting) and the company's other core processes follow later. The second dimension is location, where HCM business processes in a select set of locations are moved first before the rest of the larger regions follow.

What this results in is the requirement to keep all of the systems in sync and to ensure that processes interact smoothly across them. A few examples of setups that lead to an integration requirement in SuccessFactors are the following:

Wednesday, 27 January 2016

New SAP HANA Bundle Streamlines Business Processes

SAP has combined several pieces of software from its SAP HANA platform into a new software bundle aimed at improving big data use and analysis for businesses. The announcement of this Intelligent Business Operations bundle was made at the Gartner Business Process Management Summit in London last week.

"The full value of Big Data only comes when you embed its insights in your business processes," says Sanjay Chikarmane, senior vice president and general manager of Global Technology Solutions at SAP. "With the new intelligent business operations offering, we aim to help organizations to make use of Big Data in real time to run their processes more efficiently and intelligently."

Tuesday, 26 January 2016

Implement and consume your first ABAP Managed Database Procedure on HANA

  • SAP NetWeaver AS ABAP 7.4 Support Package 5 (or higher) running on SAP HANA
  • SAP HANA Appliance Software SPS 05 (or higher)
  • SAP HANA DB SQLScript V2.0 (or higher)
  • ABAP Development Tools for SAP NetWeaver (version 2.19) 
Tutorial Objectives
After completing this tutorial, you will be able to:
  • Declare an AMDP class
  • Declare an AMDP method
  • Implement an AMDP method
  • Consume an AMDP method in ABAP 
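As a preview of those objectives, a minimal AMDP class might look like the following sketch (class, method, and table names are illustrative, not taken from the tutorial):

```abap
CLASS zcl_amdp_demo DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Marker interface that declares this as an AMDP class
    INTERFACES if_amdp_marker_hdb.
    TYPES tt_sflight TYPE STANDARD TABLE OF sflight WITH DEFAULT KEY.
    " AMDP method: parameters must be passed by value
    METHODS get_flights
      IMPORTING VALUE(iv_carrid)  TYPE s_carr_id
      EXPORTING VALUE(et_flights) TYPE tt_sflight.
ENDCLASS.

CLASS zcl_amdp_demo IMPLEMENTATION.
  " The method body is SQLScript, executed inside SAP HANA
  METHOD get_flights BY DATABASE PROCEDURE FOR HDB
                     LANGUAGE SQLSCRIPT
                     USING sflight.
    et_flights = SELECT * FROM sflight
                  WHERE mandt  = SESSION_CONTEXT('CLIENT')
                    AND carrid = :iv_carrid;
  ENDMETHOD.
ENDCLASS.
```

Consuming it from ABAP is then an ordinary method call: `NEW zcl_amdp_demo( )->get_flights( EXPORTING iv_carrid = 'LH' IMPORTING et_flights = lt_flights )`.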

Monday, 25 January 2016

Mandatory Steps to Adapt ABAP Code for SAP HANA

In order for your ABAP code to work with SAP NetWeaver using SAP HANA as the database, you will need to verify that the code is truly database-independent and does not rely on the unique behavior of a specific database.

Adapting for SAP HANA means making your ABAP code database-independent

What does it mean for ABAP code to be database-independent? Let me illustrate this with an example. On some databases, a SELECT statement without ORDER BY returns records in the order of the index used to retrieve them. This behavior is a feature neither of standard SQL nor of the specific database; it is simply luck that the order used by the database is also the order you needed. If someone then creates a different index, or deletes the index being used, the database will return the records in a different order. If you move to a different database such as SAP HANA, the records may come back in a different order than on the previous database. Standard SQL without ORDER BY simply does not specify the order in which records are returned.
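A minimal illustration of the fix (the table and field names are only examples):

```abap
" Ambiguous: the row order depends on which index the database happens to use
SELECT * FROM vbak INTO TABLE @DATA(lt_orders)
  WHERE erdat = @lv_date.

" Database-independent: request the order explicitly in SQL ...
SELECT * FROM vbak INTO TABLE @lt_orders
  WHERE erdat = @lv_date
  ORDER BY vbeln.

" ... or sort the internal table afterwards in ABAP
SORT lt_orders BY vbeln.
```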

Thursday, 21 January 2016

SAP BW Extractor ( DataSource) based on HANA Model

In this blog, I will discuss how we can load data from HANA models or database procedures into a BW DataSource based on a function module.

Scenario 1 : 
You have a HANA model that gives you a daily real-time snapshot of open and delivered orders. You want to store the data somewhere to see the trend over time. Reporting security is implemented on the BW side and you want to reuse it. You also want to use the master data and texts available in BW.

Scenario 2: 
You have BW on HANA and native HANA in the same database, and there is a hybrid data model that uses data from both the BW schema and the native HANA schema. If the tables are small, you might pull the data over somehow and build your model in BW. However, if the tables are big and the requirement is not straightforward, a calculation view can be handy and brings a great performance benefit through input parameters. We can also model very complex requirements using stored procedures or script-based calculation views. But for some reasons we want the data persisted in BW; for example, the business wants key and text side by side for variable help values, which still does not work well in native HANA.
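Inside such a function-module-based extractor, the read from the HANA side could look like this (the view and parameter names are hypothetical):

```sql
-- Query a calculation view with an input parameter
SELECT order_status,
       SUM(order_qty) AS total_qty
  FROM "_SYS_BIC"."sales.models/CV_OPEN_ORDERS"
       ('PLACEHOLDER' = ('$$P_SNAPSHOT_DATE$$', '20160121'))
 GROUP BY order_status;
```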

Wednesday, 20 January 2016

HANA password security

While creating a new user in HANA Studio, we can choose between three types of authentication:
  • Password
  • Kerberos (Third-party authentication provider)
  • SAML (Security Assertion Markup Language)
Exhibit 1

Every database user is identified within the database by authentication based on a username and password.
In this document we will be concentrating on password and its policy parameters.

Passwords are subject to security rules, which are configured using parameters in the system properties file indexserver.ini. To have a look, open the "Administration Console" perspective -> Configuration tab -> expand indexserver.ini -> expand password policy, and you will find 11 parameters in it.
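The same parameters can also be inspected and changed in SQL; for example (the value shown is only an illustration):

```sql
-- Read the currently effective password policy
SELECT * FROM M_PASSWORD_POLICY;

-- Tighten one of the 11 parameters
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('password policy', 'minimal_password_length') = '10'
  WITH RECONFIGURE;
```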

Tuesday, 19 January 2016

Lenovo build new VMI and CFE system on SAP HANA XS for x86 Business Line

Lenovo (HKSE: 992) (ADR: LNVGY) is a US$46 billion Fortune Global 500 company and a global leader in consumer, commercial, and enterprise innovation and technology. In January 2014, Lenovo agreed to acquire the mobile phone handset maker Motorola Mobility from Google, and in July 2014 it agreed to acquire the entire x86 server business from IBM.

Why Lenovo need this project?

CFIUS approved Lenovo's x86 acquisition proposal from IBM on the condition that all x86 servers sold in the U.S. be built exclusively in Lenovo-owned plants located in Mexico and the U.S. This strict regulatory requirement brought about the Bohemia Project (whose core parts are the new APO, new VMI, and new CFE systems), which is the key to maintaining and expanding Lenovo's business in North America.
The Bohemia Project is designed to enable x86 manufacturing capability and build an end-to-end supply chain IT platform in the USFC and Monterrey plants (existing PC plants in North Carolina, US; Monterrey, Mexico; and Shenzhen), and hence to realize two main goals: meeting CFIUS's strict regulation and decreasing the cost per piece.

Monday, 18 January 2016

DB Refresh of HANA Based systems using HANA Studio Backup/Recovery without SWPM

DB Refresh using HANA Studio DB Restore  with SID Modification -  NON SAPinst

HANA Studio's DB restore option, together with hdbuserstore afterwards, makes the whole process of a system copy a lot simpler. The only limitation is that the target schema will still be named after the source.

The idea of this blog is to explain the process of performing  a simple homogeneous system copy of a HANA based system using the recovery of the source backup on the target HANA DB using HANA Studio.

In this case, we will be considering the situation of copying a production backup and refreshing it into an already running quality system.

These are the salient steps involved in achieving this

Step 1) Take a complete DB backup of the source database
Step 2) Move the database backup to the target system
Step 3) Using HANA Studio, recover the source DB on the target HANA box
Step 4) Supply the license text so that HANA Studio can automatically apply the license after the restore
Step 5) Modify the schema access on the target DB
Step 6) Modify the default key using hdbuserstore on the application servers and the DB instance
Step 7) Start SAP
Step 8) Post activities
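Step 6 is essentially a one-liner per host; a sketch with placeholder host, user, and password:

```shell
# Repoint the default hdbuserstore key at the refreshed target database
hdbuserstore SET DEFAULT "targethost:30015" SAPABAP1 "<password>"

# Verify the key before starting SAP
hdbuserstore LIST DEFAULT
```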

SAP HANA: Using Hierarchies

In SAP HANA, we have a choice of creating 2 types of hierarchies:
  1. Level Hierarchy
  2. Parent Child Hierarchy 
Level Hierarchy:
     Each level represents a position in the hierarchy. For example, a time dimension might have a hierarchy that represents data at the month, quarter, and year levels. Each level above the base (or most detailed) level contains aggregate values for the levels below it. The members at different levels have a one-to-many parent-child relation. For example, Q1-05 and Q2-05 are the children of 2005; thus 2005 is the parent of Q1-05 and Q2-05.
Hierarchies and levels have a many-to-many relationship. A hierarchy typically contains several levels, and a single level can be included in more than one hierarchy.

In our example, let us take “Level Hierarchy”.

Friday, 15 January 2016

HANA Rules Framework

Welcome to the SAP HANA Rules Framework (HRF) Community Site!

SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.
In daily business, strategic plans and mission-critical tasks are implemented through countless operational decisions, made either manually or automatically by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.

HRF Main Objectives are:
  • To seize the opportunity of Big Data by helping developers easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

Thursday, 14 January 2016

Importing Database Systems in Bulk into SAP DB Control Center

Registering a large number of database systems to monitor one by one can become a time consuming process. Instead of adding one system at a time, you can add multiple systems at once using an import file.

Note: This document was originally created by a former colleague of mine, Yuki Ji, in support of a customer engagement initiative regarding the SAP DB Control Center (DCC) product.  To preserve this knowledge, the document was migrated to this space.

1. Configure Monitored Systems - Make sure to set up the technical user for each system you wish to configure. You will need the credentials of the technical user for import registration.
2. Create an Import File - Create an import file according to the format indicated in the link. Save the import file as either a .csv or a .txt file.

Wednesday, 13 January 2016

SLT: Mass Tables for Replication to HANA

When using SLT to replicate tables from a source system to HANA, usually tables are entered for replication by using the HANA Studio.  For example, first select “Data Provisioning” from the Modeler perspective in the Studio:


Ensure the proper source and HANA systems are chosen, and select “Replicate”:


Submit Your Product Enhancement Ideas via SAP HANA Idea Place

It happens all the time - you're using a product and an idea comes to mind that could make that product better. Wouldn't it be great if you could submit your product enhancement ideas directly to the product team and potentially have them added to a future release?

SAP HANA on Idea Place gives you this opportunity. At this user community site you can submit product enhancement ideas or vote on ideas from others. The SAP HANA product team will consolidate and track all ideas, and the most popular ideas will be reviewed directly by the SAP HANA Product Management and Development teams on a regular basis. When accepted, an idea will be considered for inclusion in future releases.

A place to share your ideas with the SAP HANA product team

Continuously listening to, co-innovating with, and learning from customers is crucial for the success of SAP HANA. Product enhancement ideas are one way we listen to and co-innovate with customers and partners. But how are enhancement requests chosen, and will an idea become part of the product one day? And which product enhancements are at the top of the list for many customers?

Any customer or partner can submit ideas, comment, and vote. We will accept product ideas based on the value they deliver to our customers and our product strategy. We prioritize reviews of popular ideas because the number of votes an idea receives represents its value for our customers. The voting on ideas allows us to understand which ideas are of the highest interest to the most users. It gives every user a voice.

Tuesday, 12 January 2016

New Statistics Server Implementation

What is the Statistics Server?

The statistics server assists customers by monitoring their SAP HANA system, collecting historical performance data and warning them of system alerts (such as resource exhaustion). The historical data is stored in the _SYS_STATISTICS schema; for more information on these tables, please view the statistical views reference page on

What is the NEW Statistics Server?

The new Statistics Server is also known as the embedded Statistics Server or Statistics Service. Prior to SP7 the Statistics Server was a separate server process - like an extra Index Server with monitoring services on top of it. The new Statistics Server is embedded in the Index Server. The advantage of this is a simpler SAP HANA architecture, and it helps avoid the out-of-memory issues of the old Statistics Server, which by default was limited to only 5% of total memory.

In SP7 and SP8 the old Statistics Server is still implemented and shipped, but customers can migrate to the new statistics service if they would like by following SAP Note 1917938.

How to Implement the New Statistics Server?

The following screenshots will show how to implement the new Statistics Server. I also note what your system looks like before and after you perform this migration (the steps are listed in SAP Note 1917938 as well).
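If I read SAP Note 1917938 correctly, the switch itself boils down to one configuration change, shown here as SQL (verify against the note before running it on your system):

```sql
-- Activate the embedded statistics service (triggers the migration)
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
  SET ('statisticsserver', 'active') = 'true'
  WITH RECONFIGURE;
```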

Monday, 11 January 2016

Developer’s Journal: HANA Catalog Access from ABAP


I introduced the topic of ABAP Secondary Database Connection and the various options for using this technology to access information in a HANA database from ABAP. Remember there are two scenarios where ABAP Secondary Database Connection might be used.  One is when you have data being replicated from an ABAP based application to HANA. In this case the ABAP Data Dictionary already contains the definitions of the tables which you access with SQL statements.

The other option involves using HANA to store data gathered via other means.  Maybe the HANA database is used as the primary persistence for completely new data models.  Or it could be that you just want to leverage HANA specific views or other modeled artifacts upon ABAP replicated data.  In either of these scenarios, the ABAP Data Dictionary won’t have a copy of the objects which you are accessing via the Secondary Database Connection. Without the support of the Data Dictionary, how can we define ABAP internal tables which are ready to receive the result sets from queries against such objects?

In this blog, I want to discuss the HANA-specific techniques for reading the catalog, and also how the ADBC classes can be used to build a dynamic internal table which matches a HANA table or view. The complete source code discussed in this blog can be downloaded from the SCN Code Exchange.
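As a taste of the approach, reading a table's column metadata from the HANA catalog via ADBC might look like this sketch (the schema and table names are placeholders):

```abap
" Target structure for the catalog query
TYPES: BEGIN OF ty_col,
         column_name    TYPE c LENGTH 256,
         data_type_name TYPE c LENGTH 32,
       END OF ty_col.
DATA lt_cols TYPE STANDARD TABLE OF ty_col.

" TABLE_COLUMNS is a HANA system view describing every column in the catalog
DATA(lo_result) = NEW cl_sql_statement( )->execute_query(
  `SELECT COLUMN_NAME, DATA_TYPE_NAME FROM TABLE_COLUMNS ` &&
  `WHERE SCHEMA_NAME = 'MYSCHEMA' AND TABLE_NAME = 'MYTABLE' ` &&
  `ORDER BY POSITION` ).

lo_result->set_param_table( REF #( lt_cols ) ).
lo_result->next_package( ).   " fetch the rows into lt_cols
lo_result->close( ).

" lt_cols can now feed RTTS (cl_abap_structdescr / cl_abap_tabledescr)
" to build an internal table matching the HANA object at runtime.
```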

Thursday, 7 January 2016

"Central Table Error" and its Solution


Issues encountered while creating analytical views.
Let me explain this in detail:

Error Message:- “Central table not unique. Attributes defined for different tables.”

CASE 1: When we use a counter.

Problem Description:
When we create an analytical view from tables and attribute views, with a counter on some field, the counter is itself a measure, so we don't need to define any other measure. When we validate and activate the view, we get the error shown below:


Steps to recreate the error:

1.  Create an analytical view.


Introducing the SAP Automated Predictive Library

You may have already heard about the recent release of SAP Predictive Analytics 2.0, but may not be aware that this also includes the SAP Automated Predictive Library (APL) for SAP HANA.

The APL is effectively the SAP InfiniteInsight (formerly KXEN) predictive logic optimized and adapted to execute inside the SAP HANA database itself for maximum performance - just like the SAP HANA Predictive Analysis Library (PAL) and Business Function Library (BFL).

Obviously when you already have data in SAP HANA it makes sense to perform heavy-duty processing such as data mining as close as possible to where the data resides - and this is exactly what the APL provides.

By way of comparison, the PAL provides a suite of predictive algorithms that you can call at will - as long as you know which algorithm you need - whereas the APL focuses on automating the predictive process and uses its own built-in intelligence to identify the most appropriate algorithm for a given scenario. So the two are very much complementary.

Tuesday, 5 January 2016

Scheduling a job in SAP HANA using HDBSQL and windows task scheduler


SAP HANA, as we know, is the talk of the town and, with its enormous capabilities, has made a huge impact in the market. Since SAP HANA is still at a nascent stage, it also throws up its share of challenges. This document discusses two such challenges and suggests a workaround for them.

The Scenario:

The requirement was to generate a file with payroll data on the last day of every month, using the following data.
The Master data for the entire scenario included the Employee master data (Employee number, name and salary), Store master data (Store number, Area/Locality, Sales Manager). The daily transactions of the stores are stored in a separate table. There is also a table which shows the Monthly Target fixed for each employee.

Let us consider that the following are the data in the tables

Employee Master Table:

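The scheduling half of the workaround can be sketched as a small batch file that Windows Task Scheduler runs at month end (the user-store key, paths, and file names are illustrative):

```shell
:: payroll_export.bat - invoked by Windows Task Scheduler on the last day of the month
:: PAYROLL_KEY is a secure user store key created beforehand with hdbuserstore
hdbsql -U PAYROLL_KEY -x -I D:\scripts\payroll_query.sql -o D:\exports\payroll.csv
```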

Extending HANA Live Views

In this blog I will describe how we can extend HANA Live Views

I hope many of you are familiar with HANA Live.
If you are not, you can refer to the blog below:
SAP HANA Live - Real-Time operational reporting | SAP HANA

To extend HANA Live Views we generally make a copy of it and then make changes to it as per our needs.
You can check the document given in the blog mentioned above on how to extend HANA Live Views.

Now SAP has created a new tool called SAP HANA Live Extension Assistant.
Using this tool we can easily extend Reuse Views and Query Views.

Let's start with the installation of the extension tool:
First, download the HANA Content Tools from the Service Marketplace and then import the Delivery Unit HCOHBATEXTN.tgz.


Saturday, 2 January 2016

Smart Data Access - Basic Setup and Known Issues


This blog will focus on basic setup of Smart Data Access (SDA) and then outline some problems that customers have encountered.  Some of the issues outlined in the troubleshooting section come directly from incidents that were created.

There is already a lot of information on Smart Data Access which this blog does not aim to replace.  Throughout the blog, I will reference links to other documentation that can cover the topics in more detail.

What is Smart Data Access (SDA)?

SDA allows customers to access data virtually from remote sources such as Hadoop, Oracle, Teradata, SQL Server, and more. Once a remote connection to a data source is established, we can create virtual tables on top of its tables and query against them, or use them in data models, as if the data resided in the SAP HANA database.

This means customers do not have to migrate or copy their data from other databases into a SAP HANA database.

How to Setup SDA?

Smart Data Access was introduced in SAP HANA SP6, so if you intend to use SDA, be on at least this revision.

Prior to connecting to a remote database, you will need to configure an ODBC connection from the SAP HANA server to the remote database. For assistance with installing the database drivers for SAP HANA Smart Data Access, please refer to SAP Note 1868702 and the SAP HANA Academy videos.
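Once the ODBC driver is in place, the HANA-side setup is essentially two SQL statements; a sketch for an Oracle source (the source name, DSN, schema, and credentials are placeholders):

```sql
-- Register the remote source over ODBC
CREATE REMOTE SOURCE "MY_ORACLE" ADAPTER "odbc"
  CONFIGURATION FILE 'property_orcl.ini'
  CONFIGURATION 'DSN=ORCL_DSN'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=scott;password=<password>';

-- Expose one of its tables as a virtual table
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_EMPLOYEES"
  AT "MY_ORACLE"."<NULL>"."SCOTT"."EMP";

-- The virtual table can now be queried like a local one
SELECT COUNT(*) FROM "MYSCHEMA"."VT_EMPLOYEES";
```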

SAP HANA Dynamic Tiering Setup

What is Dynamic Tiering?

The SAP HANA dynamic tiering option is a native big data solution for SAP HANA. The dynamic tiering option adds smart, disk-based extended storage to your SAP HANA database. Dynamic tiering enhances SAP HANA with large volume, warm data management capability.

The dynamic tiering option adds the extended storage service to your SAP HANA system. You use the extended storage service to create the extended storage store and extended tables. Extended tables behave like all other HANA tables, but their data resides in the disk-based extended storage store.

Your application automatically determines which tier to save data to: the SAP HANA in-memory store (the hot store), or extended storage (the warm store). When you use dynamic tiering to place hot data in SAP HANA in-memory tables, and warm data in extended tables, highest value data remains in memory, and cooler less-valuable data is saved to the extended store. This can reduce the size of your in-memory database.

[Credits: Dynamic_Tiering_Option_Master_Guide_en]
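In SQL, creating the extended storage and an extended table looks roughly like this (host, size, and the table definition are illustrative; check the Dynamic Tiering guide for the exact syntax on your revision):

```sql
-- Create the disk-based extended storage on the ES host
CREATE EXTENDED STORAGE AT 'LinuxHost2' SIZE 500 MB;

-- An extended table: behaves like any HANA table, but its data lives on disk
CREATE TABLE "MYSCHEMA"."SALES_HISTORY" (
  order_id   INTEGER,
  order_date DATE,
  amount     DECIMAL(15,2)
) USING EXTENDED STORAGE;
```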

Dynamic Tiering Landscape Setup:

The Dynamic Tiering feature [the SAP HANA Extended Storage feature] has been supported since HANA SP09.

This blog is about my experience with the Dynamic Tiering setup, and it will help you set up Dynamic Tiering in your own landscape.

I have two Linux hosts (which I named LinuxHost1 and LinuxHost2) with the same configuration and the same root user and password.

The HANA server will be installed on LinuxHost1 and the ES server on LinuxHost2.
Both components cannot be installed on the same Linux machine.