Saturday, 30 April 2016

SAP HANA Modeling - Project Learnings and Tips

Key benefits of Data Modeling in SAP HANA:
  • Building analytics and data mart solutions with SAP HANA enterprise data modeling offers several benefits compared to traditional data warehousing solutions such as SAP BW.
  • Virtual data models with on-the-fly calculation of results, which supports reporting accuracy and requires very limited data storage – powered by in-memory processing, columnar storage and parallel processing.
  • The ability to perform highly processing-intensive calculations efficiently – for example, identifying the customers whose sales revenue is greater than the average sales revenue per customer.
  • Real-time reporting leveraging data replication and access techniques such as SLT and Smart Data Access.
Apart from HANA sidecar or data mart solutions, HANA modeling also plays an essential role in BW on HANA mixed scenarios, S/4HANA Analytics, Predictive Analytics and native HANA applications.
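The "greater than the average revenue per customer" example can be made concrete as a single pushed-down SQL statement. The sketch below uses SQLite in Python purely as a stand-in for HANA, with an invented sales table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", 100), ("B", 300), ("C", 200)])

# Average revenue per customer is computed once in a subquery,
# then used to keep only customers whose total exceeds it.
rows = conn.execute("""
    SELECT customer, SUM(revenue) AS total
    FROM sales
    GROUP BY customer
    HAVING total > (SELECT AVG(ct)
                    FROM (SELECT SUM(revenue) AS ct
                          FROM sales GROUP BY customer))
""").fetchall()
print(rows)  # [('B', 300.0)]
```

The whole comparison runs inside the database engine; only the qualifying customers cross the wire, which is the point of pushing such calculations into HANA.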

Friday, 29 April 2016

Move HANA data and log files to different mount point

I decided to write this because I ran into a space problem while trying to upgrade a HANA test system. Of course, it isn’t a certified appliance, but I had to move the data and log files to another mount point to free up enough space to complete the upgrade.

Moving data files to a different mount point is a common procedure for many databases, but it typically isn't an issue with HANA's appliance model, which prescribes a very specific hardware configuration with the persistence mount point sized at four times RAM. The introduction of Tailored Datacenter Integration (TDI) may make this procedure necessary a little more often. I've outlined the steps below to move the $(DIR_GLOBAL)/hdb/data and $(DIR_GLOBAL)/hdb/log directories to a different location.
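As a rough sketch of the mechanics, the move boils down to relocating the two directories and pointing the persistence basepaths in global.ini at the new location, with HANA stopped the whole time. The Python below is illustrative only; the function name and paths are my own, not an official tool:

```python
import os
import shutil
import configparser

def move_hana_volumes(old_root, new_root, global_ini):
    """Move the HANA data/ and log/ directories to a new mount point and
    update global.ini accordingly. Illustrative sketch only: run with the
    HANA system stopped, and take a backup first."""
    for sub in ("data", "log"):
        src = os.path.join(old_root, sub)
        dst = os.path.join(new_root, sub)
        shutil.move(src, dst)  # copies across file systems, then removes src

    # Point the persistence basepaths at the new location.
    cfg = configparser.ConfigParser()
    cfg.read(global_ini)
    if "persistence" not in cfg:
        cfg["persistence"] = {}
    cfg["persistence"]["basepath_datavolumes"] = os.path.join(new_root, "data")
    cfg["persistence"]["basepath_logvolumes"] = os.path.join(new_root, "log")
    with open(global_ini, "w") as f:
        cfg.write(f)
```

On a real system these parameters live in the [persistence] section of global.ini; after a restart the indexserver reads data and log from the new basepaths.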

Thursday, 28 April 2016

Vora 1.2 installation Cheat sheet: Concepts, Requirements and Installation

SAP HANA Vora provides an in-memory processing engine that can scale up to thousands of nodes, both on premise and in the cloud. Vora fits into the Hadoop ecosystem and extends the Spark execution framework.

Concepts and Requirements:

SAP HANA Vora 1.2 consists of the following two main components:
  • SAP HANA Vora Engine:
SAP HANA Vora instances hold data in memory and boost performance.
  • SAP HANA Vora Spark Extension Library:
    • Provides access to SAP HANA Vora through Spark.
    • Makes additional functionality available, such as a hierarchy implementation.

Thursday, 21 April 2016

Code Push Down for HANA Starts with ABAP Open SQL

What is Code Push Down?
One of the key differences when developing ABAP applications for HANA is that you can push data-intensive computations and calculations down to the HANA database layer, instead of bringing all the data to the ABAP layer and processing it there. This is what is termed the code-to-data paradigm in the context of developing ABAP applications optimized for HANA.

Where does Code Push Down Start?
It is a general misconception that to do code push down in ABAP for HANA you always need to either use HANA native SQL or build complex HANA artefacts.
But in reality, code push down from ABAP can very well start with ABAP Open SQL. Let us see how and why.

The New Enhanced Open SQL
It has been SAP's constant endeavour to improve Open SQL with each release of the ABAP Application Server, in order to make it the most efficient channel for accessing data in a database-agnostic manner.
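Outside ABAP the same code-to-data idea can be sketched in a few lines. Here SQLite stands in for the database, contrasting fetching every row into the application layer with pushing the aggregation down (an illustration of the principle, not SAP code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

# Data-to-code: transfer every row to the application layer, then compute.
total_app = sum(amount for _, amount in
                conn.execute("SELECT id, amount FROM orders"))

# Code-to-data: push the computation down; only one number crosses the wire.
total_db = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

assert total_app == total_db == 60.0
```

With three rows the difference is invisible; with millions, the second form moves a single number instead of the whole table, which is exactly what an aggregating Open SQL statement achieves on HANA.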

Wednesday, 20 April 2016

HANA: Lost Updates

A brief intro: what happens in a lost update?

A simple example that illustrates a lost update:
Session 1: User A reads record 1
Session 2: User B reads record 1
Session 1: User A updates record 1
Session 2: User B updates record 1
User B has not seen the update made by User A and overwrites the record, so User A's change is lost. This is a lost update.

How can we tackle it programmatically?
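One common programmatic answer, not specific to HANA, is optimistic locking via a version column: each writer updates only if the row still carries the version it originally read. The sketch below uses SQLite standing in for HANA, with invented table and column names, and shows User B's stale write being rejected instead of silently overwriting User A's change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rec (id INTEGER PRIMARY KEY, val TEXT, version INTEGER)")
conn.execute("INSERT INTO rec VALUES (1, 'initial', 0)")

def read(rid):
    return conn.execute("SELECT val, version FROM rec WHERE id = ?", (rid,)).fetchone()

def update(rid, new_val, read_version):
    # Succeeds only if nobody changed the row since we read it.
    cur = conn.execute(
        "UPDATE rec SET val = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_val, rid, read_version))
    return cur.rowcount == 1

_, v_a = read(1)   # user A reads record 1
_, v_b = read(1)   # user B reads record 1
assert update(1, "A's change", v_a) is True    # A updates first
assert update(1, "B's change", v_b) is False   # B's version is stale: rejected
```

User B's transaction sees zero affected rows, re-reads the record, and can then merge or retry, so no update is lost.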

Tuesday, 19 April 2016

Installing SAP HANA SPS 7 on AWS

You should be aware that Amazon currently only provides SAP HANA Revision 68 as a ready-to-go installation image. This tutorial thoroughly explains how you can reinstall SAP HANA SPS 7 yourself in order to replace the old version.

Launching an instance

First, we will launch a SUSE Linux Enterprise Server 11 Service Pack 3 64-bit instance, sized m2.4xlarge with 68.4 GB of memory.


Thursday, 14 April 2016

RANK Function by SQL & Calculation View

RANK logic implemented via the SQL RANK function, plain SQL logic, a graphical calculation view and a CE function.

Scenario:
⦁ Consider a non-SAP load (e.g. a flat file load) that is fully loaded daily into a HANA table.
⦁ Because of the full load, all transactions are uploaded into the HANA table daily, unless we implement some pseudo-delta logic on the source side.
⦁ There is a chance of getting the same transaction multiple times from the source file if there were multiple changes to any key figures for the same Transaction ID.
⦁ For example, order 100000 has an order quantity of 10 KG on its creation date.
⦁ On the same day or a subsequent day, the order quantity for this transaction increases from 10 KG to 20 KG.
⦁ So from the non-SAP source we get this transaction multiple times, with the old and new order quantity values and different timestamps.
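The scenario above amounts to keeping only the newest record per Transaction ID, which is exactly what a rank over a partition does. A sketch with SQLite in Python (window functions require SQLite 3.25 or later; the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, qty_kg REAL, ts TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("100000", 10.0, "2016-04-14 08:00"),  # original value on creation
    ("100000", 20.0, "2016-04-14 15:00"),  # later change, same order
    ("100001", 5.0,  "2016-04-14 09:00"),
])

# Rank the records newest-first within each order; keep only rank 1.
rows = conn.execute("""
    SELECT order_id, qty_kg FROM (
        SELECT order_id, qty_kg,
               RANK() OVER (PARTITION BY order_id ORDER BY ts DESC) AS rnk
        FROM orders)
    WHERE rnk = 1
    ORDER BY order_id
""").fetchall()
print(rows)  # [('100000', 20.0), ('100001', 5.0)]
```

The same PARTITION BY / ORDER BY ... DESC pattern is what a rank node in a HANA graphical calculation view expresses.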

Tuesday, 12 April 2016

Introducing SAP HANA Vora1.2

SAP HANA Vora 1.2 was released recently, and with this new version we have added several new features to the product. Some of the key ones I want to highlight in this blog are:

  • Support for the MapR Hadoop distribution
  • A new “OLAP” modeler to build hierarchical data models on Vora data
  • A discovery service using open-source Consul, to register Vora services automatically
  • A new catalog to replace ZooKeeper as the metadata store
  • Native persistency for the metadata catalog using a distributed shared log
  • A Thrift server for client access through JDBC-Spark connectivity

The new installer in Vora 1.2 extends the simplified installer to use Hadoop management tools like the MapR Control System to deploy Vora on all the Hadoop/Spark nodes. This is in addition to what was provided in version 1.0 for the Cloudera Manager and Ambari admin tools.

Monday, 11 April 2016

HANA Input Parameters with texts

Introduction

HANA Variables are easy to use and mostly perform well on well-modeled HANA views. There are exceptions, though, with big tables where the join happens at a very granular level; there it is smarter to use Input Parameters, mapped through nested layers of HANA views, instead of just a Variable in the topmost one. Performance-wise you can achieve savings of 20-50% (personal observation), but it then gets a little tricky to provide both keys and texts in the Value Help.

An example:
  • You have a basic view joining the tables BSEG & BKPF
  • This basic view is used in Reporting view
Basic view

Sunday, 10 April 2016

HANA Live implementation (sidecar scenario)

In this “how-to” post I aim to cover the additional parts that arise when using HANA Live models in a sidecar scenario next to your main (mature) SAP installation on a different DB. This blog post is of lesser interest where an existing ECC / CRM system is migrated to HANA DB, because the required tables are then readily available under the SAP_ECC / SAP_CRM schema; all you need to do is install the HANA Live content and start using / modifying it.

Just a short recap: the scenario I am trying to cover here is the least-risk (and possibly least-cost) approach to leveraging the advantages of HANA with minor changes to an existing ECC/CRM landscape, in which an additional HANA instance is connected to the “main” system running on a traditional relational database. It may be a dedicated HANA box or any of the cloud solutions on the market.

Saturday, 9 April 2016

HANA CatEye! Experimental Project with NodeJS + MongoDB + RaspberryPI3


In my previous article, I explained BPC on HANA using HANA objects and their advantages. This time I want to write about NodeJS, which is going to be the primary JavaScript runtime in HANA XS as of SAP HANA SPS 11. I have also developed a simple application using NodeJS, named HANA CatEye.

Friday, 8 April 2016

BPC on HANA by using HANA Objects Part I

 BPC on HANA by using HANA Objects:

Our approach has two main advantages.
  1. Overall significant decrease on development time
  2. Huge performance increase
The initial phase of the project was implemented in the traditional way: developing the business logic by optimizing the ABAP code and using parallel processes to maximize performance and decrease the runtime of the packages. The runtime was still taking too long due to the huge amount of data.


Thursday, 7 April 2016

Expose CDS Views as OData Service

In continuation of the previous blog, Core Data Services in ABAP, in this blog I will show how to create CDS views and give a step-by-step procedure for generating an OData service from them.

There are 3 different ways you can expose CDS views as OData service:
  1. Import a DDIC structure using the SEGW NetWeaver Gateway Service Builder transaction.
  2. Reference a Data Source using the SEGW NetWeaver Gateway Service Builder transaction.
  3. Use generic annotations (@OData.publish: true).
The first way is supported from SAP ABAP NW 7.40 SP5; the second and third are supported from SAP ABAP NW 7.50 and above. The first two ways use SAP NW Gateway, while with the third, OData services are created through annotations without it; the Gateway is used only to add the service created by the annotation.
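For the third way, the @OData.publish annotation sits directly on the CDS view definition; a minimal sketch (the view name and SQL view name are invented for illustration):

```abap
@AbapCatalog.sqlViewName: 'ZSOHDR'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@OData.publish: true
define view Z_Sales_Order_Hdr as select from vbak {
  key vbeln as SalesOrder,
      erdat as CreatedOn,
      netwr as NetValue
}
```

On activation, the framework generates the OData service for this view; it then only needs to be added and activated in the Gateway.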

In this blog I am going to show the first way, i.e. how to generate an OData service by importing a DDIC structure in the SEGW Service Builder transaction.

Wednesday, 6 April 2016

SAP S/4HANA from a Developer's Point of View - Part One

Introduction
In this first part I try to explain the main differences between R/3, SAP ERP and S/4HANA.

SAP S/4HANA from a Developers Point of View - Part One

As you can see from the picture above, R/3 was upgraded, extended and eventually renamed to SAP ERP. With each upgrade from R/3 to the next version, SAP mostly added or enhanced functionality and kept the rest as before. This allowed custom ABAP programs to continue to run with no, or maybe small, adjustments.

Tuesday, 5 April 2016

SAP HANA Multitenant Database encryption with change of the encryption root key for SYSTEMDB

Why This Blog:

In this case, the high-level security measure is to enable Data Volume Encryption on the HANA system.

This is the first time I have enabled Data Volume Encryption on a HANA multitenant database.

After we executed the steps described in the SAP HANA Administration Guide for enabling Data Volume Encryption, alert 57 was raised in our SYSTEMDB, reporting "Inconsistent SSFS". At this point our tenant DB was working without issues, including backup. For the system DB we were experiencing all the symptoms reported in SAP Note 2097613.

Monday, 4 April 2016

Parallelization options with the SAP HANA and R-Integration

Why is parallelization relevant?

The R-Integration with SAP HANA aims at leveraging R's rich set of powerful statistical and data mining capabilities, as well as its fast, high-level, built-in convenience operations for data manipulation (e.g. matrix multiplication, data subsetting), in the context of a SAP HANA-based application. To benefit from the power of R, the R-integration framework requires a setup with two separate hosts for SAP HANA and the R/Rserve environment. A brief summary of how R processing from a SAP HANA application works follows:

  • SAP HANA triggers the creation of a dedicated R-process on the R-host machine, then
  • R-code plus data (accessible from SAP HANA) are transferred via TCP/IP to the spawned R-process.
  • Some computational tasks take place within the R-process, and
  • the results are sent back from R to SAP HANA for consumption and further processing.

Saturday, 2 April 2016

Tableau and HANA

Introduction

Tableau is an easy-to-use visualization tool that allows you to connect to various types of data sources, visualize the data and create interactive dashboards and story points.

Connecting HANA and Tableau

After reading Ronald's blog, I thought connecting Tableau to HANA would be very easy, but when I tried to connect Tableau to HANA using the native connector, I got the error below:
