Wednesday, 31 August 2016

SAP HANA XS Advanced Installation through resident hdblcm (Command based)

SAP HANA XS Advanced Installation:

Prerequisites:
SAP HANA 1.0 SPS 11 or later is required; upgrade your SAP HANA system to SPS 11+ to take advantage of XS Advanced (XSA).

1. Download the XS Advanced run-time from the SAP Service Marketplace (SMP)


Tuesday, 30 August 2016

Data Loading to HANA using DXC Connection

There are three main data provisioning tools for an SAP HANA system:
1.  SAP BODS – connects both SAP and non-SAP systems
2.  SAP SLT – connects both SAP and non-SAP systems
3.  DXC – connects SAP systems only

In this post we will discuss how to extract data to the SAP HANA system over a DXC connection.

SAP HANA Direct Extractor Connection (DXC) is available as a simple option in ETL (batch) scenarios for data replication from existing SAP DataSource extractors into SAP HANA.

Monday, 29 August 2016

Vora 1.2 Modeling Tool

SAP HANA Vora provides an in-memory processing engine which can scale up to thousands of nodes, both on premise and in cloud. Vora fits into the Hadoop Ecosystem and extends the Spark execution framework.

The following image shows where Vora fits in the Hadoop ecosystem:


Tuesday, 23 August 2016

New Hierarchy SQL enablement with Calculation Views in SAP HANA 1.0 SPS 10

SQL enabled hierarchies with SAP HANA Calculation Views


Modeling SAP HANA Calculation Views is the key approach to successfully exploit the power of the SAP HANA Platform and leverage key SAP HANA capabilities. With SAP HANA SPS 10 (see the SPS 10 enhancement overview), calculation views provide a deeper integration of hierarchy objects and their exposure for usage within SQL. By leveraging the SQL integration of hierarchy objects, hierarchy-based filters, aggregations and hierarchy-driven analytic privileges are enabled.

Monday, 22 August 2016

ALV and FPM on SAP HANA

With ALV and FPM lists, SAP provided very powerful and convenient tools to represent lists in SAPGUI, WebDynpro UIs or SAP NetWeaver Business Client. These tools have been very well adapted to the paradigm of databases being the bottleneck. In this context it is an efficient design to select the data into an ABAP table and to execute the UI operations like paging, sorting, aggregation and filtering on the ABAP application server. Handing over this internal table as data source to the reuse components ALV and FPM list is their basic principle and the reason why they can provide these operations out of the box: they have control over the result set.

Sunday, 21 August 2016

XS application for table distribution in scale out HANA system

HANA demands optimal distribution of data across all the HANA blades for best performance. A proper table distribution helps for more optimal load balancing and better parallelization. In this blog we will cover only the table distribution part. Table partitioning optimization will be described in another SCN article.
Here are several basic rules for general DB table distribution which this app follows:
  1. Large tables should not be replicated.
  2. Small tables can exist on different nodes to leverage more optimal joins and prevent network transfer between the DB nodes.
  3. Tables are distributed as evenly as possible.
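As a sketch of how these rules translate into action (the schema, table, and host names below are placeholders, not from the original post), the current distribution can be inspected via the M_CS_TABLES monitoring view, and a table can be relocated to another node with ALTER TABLE ... MOVE TO:

```sql
-- Inspect how column-store tables are currently distributed across hosts
SELECT host, schema_name, table_name,
       ROUND(memory_size_in_total / 1024 / 1024) AS size_mb
FROM m_cs_tables
WHERE schema_name = 'MYSCHEMA'      -- hypothetical schema
ORDER BY memory_size_in_total DESC;

-- Move a large table to a different index server node (host:port are placeholders)
ALTER TABLE myschema.big_table MOVE TO 'hanahost02:30003';
```

The XS application described in the post automates exactly this kind of decision; the statements above are only the manual equivalent of a single move.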

Saturday, 20 August 2016

How to Generate a Row Number or Sequence Number Using a HANA Graphical Calc View Rank Node

Create a table named COUNTRY in SAP HANA with the following columns:
COUNTRY_NAME        VARCHAR(50)
COUNTRY_ID          INTEGER
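For comparison with the graphical Rank-node approach the post describes, the same sequence number can be produced in plain SQL with the ROW_NUMBER window function; this is only an illustrative sketch using the table above:

```sql
CREATE COLUMN TABLE country (
    country_name VARCHAR(50),
    country_id   INTEGER
);

-- Sequence number per row, analogous to a Rank node ordered by COUNTRY_ID
SELECT ROW_NUMBER() OVER (ORDER BY country_id) AS row_num,
       country_name,
       country_id
FROM country;
```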


Friday, 19 August 2016

Licensing, Sizing and Architecting BW on HANA

I've had more than a few questions on BW on HANA Licensing and Sizing, and it seems that there isn't anything authoritative in the public domain. So here we go, but before we start...

Caveats


Architecting BW on HANA systems requires some care. First, database usage, number of indexes and aggregates, use of database compression, reorgs and non-Unicode systems all cause a variance in compression in the HANA DB. The best way to size a HANA DB is to do a migration.

Thursday, 18 August 2016

First Steps of Code Quality Analysis for SAP HANA SQLScript

One of these is SAP HANA SQLScript, which is used to develop high-performance stored procedures for the SAP HANA in-memory database. Unfortunately, SAP does not provide any static code analysis for SQLScript (in contrast to the SAP Code Inspector for ABAP). Moreover, there are no precise guidelines on how to develop good SQLScript code so far. In this post I'll present our initial thoughts on assessing the code quality of SQLScript.

The starting point for identifying relevant static checks was the SAP HANA SQLScript Reference, which already mentions some (very general) best practices (chapter 13). Some of the recommendations there are very easy to detect automatically, e.g. the advice to reduce the use of SELECT * in queries.
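As an illustration of such a best practice (the table and variable names are made up for this example), the SQLScript reference's advice against SELECT * is trivially detectable in procedure code like this:

```sql
-- Discouraged: SELECT * makes the result structure implicit
-- and couples the procedure to every column of the base table
orders = SELECT * FROM sales_orders WHERE status = 'OPEN';

-- Preferred: name only the columns you actually need
orders = SELECT order_id, customer_id, amount
         FROM sales_orders
         WHERE status = 'OPEN';
```

A first-cut static check would simply scan procedure bodies for the `SELECT *` pattern and report each occurrence.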

Wednesday, 17 August 2016

How to Define Role to different Server Nodes in Multi Node HANA System

The standard SAP recommended Node role would be as follows:


In the above screenshot we have three nodes, of which the first has been set as the master node for the index server and name server.

Tuesday, 16 August 2016

Connecting SAP HANA Views to Sensor Data from Osisoft PI

Organizations across different industries leverage Osisoft PI systems to collect operational data (e.g. temperature, pressure, flow) from sensors. This sensor data can be used to get a real-time view of the operational performance of assets, monitor the quality of products, or identify machine failures, to highlight just a few examples. Moreover, sensor data from an Osisoft PI system can even be consumed in SAP HANA Views and thus enable new insights for business users.
In this blog, I give an overview on how these sensor data stored in an Osisoft PI system can be accessed from SAP HANA Views in real time via the SAP Manufacturing Integration and SAP Plant Connectivity solutions, without having to persist the data in the SAP HANA database. I focus on accessing the PI Data Archive and PI Asset Framework (PI AF) of an Osisoft PI system.

Monday, 15 August 2016

How and why to activate asynchronous queuing of IO requests

Why Asynchronous IO:

Undesired synchronous I/O can have a major impact on the HANA performance, especially restart time and table load time for read I/O as well as savepoint and write transaction commit times for write I/O.

HANA uses asynchronous IO to write to and read from disk. Ideally, asynchronous IO should have a trigger ratio close to zero. A trigger ratio close to 1 indicates asynchronous IO that behaves almost like synchronous IO, that is, triggering an IO request takes just as long as executing it, which makes the HANA system very prone to performance degradation. In such cases we need to activate asynchronous submission of IO requests for the affected file system path.
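A minimal sketch of such an activation sets the asynchronous submit parameters in the fileio section of global.ini (the exact parameter set and the per-path variants should be verified against the SAP Notes applicable to your HANA revision):

```sql
-- Switch asynchronous submission on for reads and writes (global.ini, [fileio])
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('fileio', 'async_read_submit')         = 'on',
        ('fileio', 'async_write_submit_active') = 'on'
    WITH RECONFIGURE;
```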

Saturday, 13 August 2016

How to use HDBAdmin to analyze performance traces in SAP HANA

Most of the time, the Plan Visualizer is sufficiently powerful to understand what is going on inside of SAP HANA when you run a query. However, sometimes you need to get to a lower level of detail to understand exactly what is going on in the calculation engine.

It is then possible to use HANA Studio to record performance traces, and analyze them with HDBAdmin. This is a fairly advanced topic, so beware!

First, let's pick a query which runs slowly. This query takes 12 seconds, which is longer than I'd like. Admittedly, it's a tough query, grouping 1.4bn transactions and counting over 2m distinct customers.

Thursday, 11 August 2016

ABAP on HANA - Use Cases

Introduction to ABAP on HANA


Through HANA, SAP has brought forth a high performing multi faceted appliance with rich analytic computational capabilities, in-memory hardware, enhanced compression technology, geospatial capabilities, Text analytics and predictive analytics, to name a few. With so powerful a back-end, the application layer too had to be revised to fully leverage the enriched capabilities of HANA. CDS Views, AMDPs, and enhancements to existing Open SQL are the various available solutions which help in achieving Code Push Down – transfer the data intensive logic to the database resulting in better performance. AS 7.4 or above is the required version of application layer on a HANA database for the features mentioned throughout the document to work.

Wednesday, 10 August 2016

How to Troubleshoot a Failed Statistics Server Migration

Run the following SQL to determine the time at which the migration failed:

SELECT value
FROM _SYS_STATISTICS.STATISTICS_PROPERTIES
WHERE key = 'internal.installation.state';

This will return the time at which the switch failed.
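If the migration needs to be re-triggered after the root cause has been fixed, the commonly used approach (verify against the SAP Note applicable to your revision before running this on a production system) is to re-enable the embedded statistics service via nameserver.ini:

```sql
-- Re-trigger the migration to the embedded statistics service
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
    SET ('statisticsserver', 'active') = 'true'
    WITH RECONFIGURE;
```

The STATISTICS_PROPERTIES query above can then be re-run to monitor the new migration attempt.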

Tuesday, 9 August 2016

How to connect Microsoft SSIS with SAP HANA

SSIS (SQL Server Integration Services) is a component of MS SQL Server which can be utilized for various data migration tasks. This blog provides a step-by-step process, with screenshots, for implementing a connection between SAP HANA and MS SSIS to perform data transfer.


Tools Required

  1. HANA Studio
  2. MS Visual Studio
  3. Business Intelligence tools for Visual Studio
  4. HANA Client

Monday, 8 August 2016

HANA TA for Hybris Ecommerce - Why Google??

Context Setting


Alright, so let's pick an e-commerce site, say Levi's Great Britain.


Saturday, 6 August 2016

Generate Time Data in SAP HANA - Part 2

Procedure:


1. Generate the master data from the specific time frame that you are interested in
  • On the Quick Launch Page > Data > Generate Time Data

Friday, 5 August 2016

Generate Time Data in SAP HANA - Part 1

In this document I try to explain the "Generate Time Data" functionality, giving a general idea of how it works with the calendar type "Gregorian".
To better understand how to use "Generate Time Data", we are going to use standard tables for the examples.

Note: To use this option you need to replicate the standard tables T005T, T005U, T009, and T009B into SAP HANA. If these standard tables are not available, you will not be able to use the "Generate Time Data" function.
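Once generated with the Gregorian calendar type, the time data lands in the _SYS_BI schema; a quick sanity check might look like the sketch below (the exact column set may vary by revision):

```sql
-- The wizard fills _SYS_BI.M_TIME_DIMENSION for the Gregorian calendar
SELECT datetimestamp, date_sql, year, quarter, month, week
FROM _SYS_BI.M_TIME_DIMENSION
WHERE year = '2016'
ORDER BY date_sql
LIMIT 10;
```

If the query returns no rows, the generation step has not been run for the chosen time frame.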

Thursday, 4 August 2016

Expose Attribute Views as XS OData Service in HANA

1. Change the Eclipse IDE perspective to SAP HANA Development: navigate to Window → Open Perspective → Other, or select the SAP HANA Development perspective from the perspective shortcut at the top right corner of your SAP HANA Studio.
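For reference, a minimal .xsodata service definition exposing the runtime column view of such an attribute view might look like the sketch below (the package, view, and entity names are hypothetical; the runtime object of an activated attribute view lives in the _SYS_BIC schema):

```
service {
    "_SYS_BIC"."mypackage/AT_PRODUCT" as "Products"
        keys generate local "GenID";
}
```

Since views have no primary key, `keys generate local` tells the OData runtime to synthesize a key column for the entity set.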


Wednesday, 3 August 2016

Modelling: Column to Row Transpose using Matrix in HANA

Background:


In almost every project, some form of data transformation is required. Most of these transformations are some combination of Aggregation, Projection, Union or Join steps, and they can easily be implemented in HANA models.

Where possible, it may be a good idea to push complex transformations down to the data acquisition layer using an ETL tool such as SAP Data Services.

Tuesday, 2 August 2016

What’s new in SAP HANA SPS12 – SAP HANA Graph Engine



SAP HANA Graph is an integrated part of SAP HANA core functionality. It extends the relational database management system with native support for graph processing and allows typical graph operations to be executed on data stored in an SAP HANA system. Execution plans are optimized automatically for highly performant queries, and built-in graph algorithms based on a flexible-schema property graph data model make it possible to traverse relationships without predefined modeling or complex JOIN statements.
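As a sketch of the SQL surface this enables (the table, column, and workspace names below are hypothetical, not from the post), a graph workspace is defined on top of existing vertex and edge tables:

```sql
-- Define a property graph over two ordinary column tables
CREATE GRAPH WORKSPACE my_graph
    EDGE TABLE myschema.edges
        SOURCE COLUMN source_id
        TARGET COLUMN target_id
        KEY COLUMN edge_id
    VERTEX TABLE myschema.vertices
        KEY COLUMN vertex_id;
```

The workspace itself stores no data; it only declares which tables and columns the graph engine should interpret as vertices and edges.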

Monday, 1 August 2016

Table Valued - UDFs Vs Scripted Calculation Views - Rudimentary Exploration

'Use table functions instead of scripted calculation views (SP09)'

This sounded as if the coveted scripted calculation views, part and parcel of any real-world customer implementation, were facing an extinction threat! Fathoming what that means warranted some investigation, so here is my attempt to get the basics in place with respect to user-defined table-valued functions (a.k.a. TV-UDFs).
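To make the comparison concrete, here is a minimal table-valued UDF sketch (the function and table names are invented for illustration); unlike a scripted calculation view, it composes directly with any SQL statement:

```sql
-- A table function can replace a scripted calculation view
CREATE FUNCTION get_big_countries (IN min_id INTEGER)
RETURNS TABLE (country_id INTEGER, country_name VARCHAR(50))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN
    RETURN SELECT country_id, country_name
           FROM country            -- hypothetical base table
           WHERE country_id >= :min_id;
END;

-- Consumed like an ordinary table, with parameters and full SQL composability
SELECT * FROM get_big_countries(100) ORDER BY country_id;
```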