Channel: SCN : All Content - SAP for Utilities

Create an empty (blank) mandate


Hi experts,

 

For our SEPA implementation project, we need to create an empty (blank) mandate on which only the mandate ID is printed.

The form will be used by sales staff (door-to-door canvassers) to sign up new contracts.

 

At the moment the business partner (customer) signs the contract on paper (at home or wherever), the mandate ID must already be known.

 

Currently, for every mandate we create in our SAP system for existing customers, the BP number is mandatory, among other fields.

    

Is there any configuration we can use to solve this without a custom Z-program?

 

 

Thanks in advance,


Contract Account creation event


Hi,

We have a requirement to trigger an event that creates IDocs for contract account master data. This event should trigger as soon as a contract account is created in SAP (manually via transaction CAA1). The idea is to replicate the contract account master data in a third-party system and keep it in sync with the SAP contract account master data.

 

We tried a few events from FQEVENTS, but they do not trigger right after the contract account is created in SAP. Does anyone know of an event that can help? Any advice is appreciated.

 

Regards,

SAPUser

Electronic Bank Statements in FICA


Hi,

I am trying to process an electronic bank statement in FI-CA using the BAI format and post payment lots/returns lots. I see that in FI-CA, transaction FPB17 processes only the Multicash file format. Does anyone know how to proceed with a BAI file in FI-CA? Do we need a conversion program to convert BAI to Multicash? Please advise; any help/lead is greatly appreciated.

 

Regards,

Praveen

BPEM EMMACLS Configuration


Hi

 

In SAP R/3 4.7 and earlier releases, the agent is not authorized to change the layout on the EMMACLS screen (it is secured).

 

In SAP ECC, however, the agent can change the layout. In my scenario, I need to restrict agents from moving from one layout to another, because they are not supposed to work on clarification cases of a different layout. Is there any way to achieve this by removing the agent's option to change the layout?

 

I have checked authorization object B_EMMA_CAS, but the ACTVT field does not provide this functionality. Please help.

 

 

Regards,

Surya

Payment Through FPY1 showing Error


Hi ,

I have created payment method D for direct debit and assigned it to the contract account. Now I am taking the payment through FPY1, but the item is not paid and an error is shown in the application log. Is there any configuration missing? When the same payment method is assigned at document level, no error occurs. Please find the screenshot below.

 

Data Migration Process for IS-U


In this post I will share my experience of the migration process performed for electricity and gas data, using the Emigall tool-set for Utilities. I will walk through the road map we followed during the different stages of the migration life cycle. The primary focus will be on the use of the Emigall tool-set in the different phases of the migration cycle.

Before we proceed further, please read this fine print. All the views provided below are my personal opinion and do not necessarily reflect my employer's. I work for my employer as a consultant with a focus on the utilities industry, but every customer's requirements are unique, and you should definitely seek professional advice before making your business decisions. Treat the inputs in this blog series as just opinions, nothing more, nothing less.

 

SAP Data Migration Approach:

SAP recommends Rapid Data Migration and Data Quality for migrating non-SAP data to the SAP system.

SAP depicts the migration process as below:


Image1.png

SAP Rapid DM and DQ provide a migration framework, including documentation, templates, methodology, tools and expertise to analyze, extract, cleanse, validate, upload and reconcile legacy data and accelerate your migration project.

 

Details can be found at the SAP links below:

http://service.sap.com/bestpractices

https://websmp103.sap-ag.de/rds-dm2u

 


Migration Life cycle:

There are many resources available online on the data migration process; here I will cover the following stages as the road map for the data migration life cycle and describe how we used the Emigall tool-set in each phase.

  1. Data Scanning/Profiling
  2. Data Identification
  3. Data Mapping
  4. Data Transformation
  5. Data Load
  6. Data Cleansing
  7. Data Reconciliation

 

In my next blog I will write about the approaches taken for performance-tuning the Emigall loads.

 

1. Data Scanning or Profiling

1.1     Migration objects streaming

Object streaming happens in the blueprint design phase of the project. The business streams are identified based on functional processes, and data is categorized under specific streams. In our implementation we profiled the data into the streams below.

 

  1. Front Office
  2. Device Management
  3. FICA
  4. Billing
  5. Service Location Master Data
  6. Address/Regional Structure
  7. Service Management


For the streams identified, the next step is to classify each object within each stream at a high level. The business owner, the functional team and the migration team go through the requirements, and based on the outcome:

  1. Standard SAP Migration objects from the ISMW are identified.
  2. Custom objects are built to cater to the additional requirements.

    

An example list of the objects we identified and their associated streams is attached here:

 

Image A.jpg
Image B.jpg


1.2 Object dependencies:

Dependencies define the relationships among objects. If objects are migrated out of dependency order, it can lead to serious issues, with the data not meeting the required functionality. The worst case would be reloading the entire data set all over again, or a series of manual corrections.


The objects within each stream have dependencies and are also related to objects in other streams, leading to cross-dependencies. These relationships are the outcome of functional and business requirements.

 

We found that setting up the object relationships early in the blueprint phase worked well for us. A team of business owners, functional consultants and the data load team worked on establishing the relationships.

 

 

1.3 Object sequencing

Once the object relationships are established, the cross-stream objects need to be sequenced appropriately; this sequence is then followed during the loads. From the objects identified and the dependencies set, the following sequencing was used for the data loads.

Image2.jpg

 

Based on the object sequencing, the load strategy was designed. The strategy had to ensure parallel loads of the objects and the best possible use of the background work processes; a small sequence-check sketch follows below. Once the object dependencies and sequencing are identified, the next stage is data identification.
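To make the dependency idea concrete, here is a small, purely illustrative ABAP sketch that checks a planned load sequence against a list of object dependencies. The object names and dependencies are examples only, not the project's actual list.

```abap
REPORT zmig_sequence_check_sketch.

" Illustrative dependency list: each migration object and the object
" that must be loaded before it (names are examples only).
TYPES: BEGIN OF ty_dep,
         object TYPE c LENGTH 20,
         prereq TYPE c LENGTH 20,
       END OF ty_dep.
TYPES ty_name TYPE c LENGTH 20.

DATA: lt_dep     TYPE STANDARD TABLE OF ty_dep,
      ls_dep     TYPE ty_dep,
      lt_seq     TYPE STANDARD TABLE OF ty_name,
      lv_name    TYPE ty_name,
      lv_pos_obj TYPE i,
      lv_pos_pre TYPE i.

ls_dep-object = 'ACCOUNT'.   ls_dep-prereq = 'PARTNER'. APPEND ls_dep TO lt_dep.
ls_dep-object = 'MOVE_IN'.   ls_dep-prereq = 'ACCOUNT'. APPEND ls_dep TO lt_dep.
ls_dep-object = 'INST_MGMT'. ls_dep-prereq = 'DEVICE'.  APPEND ls_dep TO lt_dep.

" Planned cross-stream load sequence to validate.
lv_name = 'PARTNER'.   APPEND lv_name TO lt_seq.
lv_name = 'ACCOUNT'.   APPEND lv_name TO lt_seq.
lv_name = 'DEVICE'.    APPEND lv_name TO lt_seq.
lv_name = 'INST_MGMT'. APPEND lv_name TO lt_seq.
lv_name = 'MOVE_IN'.   APPEND lv_name TO lt_seq.

" Every prerequisite must appear in the sequence before its dependent object.
LOOP AT lt_dep INTO ls_dep.
  CLEAR: lv_pos_obj, lv_pos_pre.
  READ TABLE lt_seq WITH KEY table_line = ls_dep-object TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    lv_pos_obj = sy-tabix.
  ENDIF.
  READ TABLE lt_seq WITH KEY table_line = ls_dep-prereq TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    lv_pos_pre = sy-tabix.
  ENDIF.
  IF lv_pos_obj = 0 OR lv_pos_pre = 0 OR lv_pos_pre > lv_pos_obj.
    WRITE: / 'Sequence violates dependency:', ls_dep-prereq, '->', ls_dep-object.
  ENDIF.
ENDLOOP.
WRITE: / 'Sequence check finished.'.
```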

 

2.  Data Identification

The Emigall objects were identified, set up, and sliced and diced for the loads in this step. The identified Emigall objects were scrutinized to generate the required automation structures and fields. The details of the Emigall configuration and the specific object configurations can be found in the SAP documentation at the link below: Guidelines_ISMW

 

The following steps worked well for us when setting up the Emigall objects:

  1. Data mock-up was carried out from the SAP GUI through the transaction codes, e.g. FPP1 for creating the contract partner. This step helps answer the questions below:

                what are the mandatory fields in the transaction?

                which fields are expected to have values from the legacy system?

                what are the expected values for the fields, e.g. ID types?

                how should the final data look?

   2. Based on step 1, the Emigall object was examined to generate the automation structures and the fields within each structure.

   3. Once the Emigall object is generated along with the data fields, a sample data file is mocked up in Emigall by creating data import files from the Emigall transaction.

    Image 16.jpg
   4. Data loads are carried out from the data import file. The load result should exactly mimic the data created from the front end in step 1.

   5. Steps 1 to 4 can be iterative, depending on the business scenarios and the identified scope. Based on the scenarios, the field processing types (rules, conversion sets, initial values, etc.) are identified in this step.

   6. For objects that cannot be handled by standard SAP, custom objects are created. The guidelines documentation describes the creation of custom objects in Emigall: Guidelines_ISMW


These steps were carried out for each object together with the functional consultants and the data load team.

 

3.  Data Mapping

This stage identifies the mapping rules between the source system data and the target system. The target field values in Emigall are identified and appropriate mapping rules are established. As an example, the connection object's regional structure data could be numeric codes in the source system while it is configured as alphanumeric in the target system.


Emigall provides field processing types to implement the mapping through rules, conversion sets and fixed values. We used these processing types only when the mapping rules were simple one-to-one mappings that did not require a table fetch in SAP. More details and techniques can be found in the Guidelines_ISMW.

 

Image3.jpg

 


For complex rules involving look-up tables and cross-references, it worked well for us to implement them in the data transformation step; a sketch follows below.
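As an illustration of what such a transformation-side rule looked like, here is a minimal ABAP sketch of a look-up mapping from numeric legacy regional-structure codes to alphanumeric target codes. The mapping values and field lengths are invented for the example; in the project the cross-reference came from the transformation tool, not from hard-coded values.

```abap
REPORT zmig_regio_map_sketch.

" Hypothetical cross-reference: legacy numeric code -> alphanumeric target code.
TYPES: BEGIN OF ty_map,
         legacy_code TYPE n LENGTH 4,
         target_code TYPE c LENGTH 8,
       END OF ty_map.

DATA: lt_map TYPE SORTED TABLE OF ty_map WITH UNIQUE KEY legacy_code,
      ls_map TYPE ty_map,
      lv_in  TYPE n LENGTH 4.

ls_map-legacy_code = '0010'. ls_map-target_code = 'R-NORTH'. INSERT ls_map INTO TABLE lt_map.
ls_map-legacy_code = '0020'. ls_map-target_code = 'R-SOUTH'. INSERT ls_map INTO TABLE lt_map.

" Map one incoming legacy value; unmapped values are flagged for cleansing.
lv_in = '0010'.
READ TABLE lt_map INTO ls_map WITH TABLE KEY legacy_code = lv_in.
IF sy-subrc = 0.
  WRITE: / 'Legacy code', lv_in, 'maps to', ls_map-target_code.
ELSE.
  WRITE: / 'Legacy code', lv_in, 'has no mapping; route the record to cleansing.'.
ENDIF.
```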

 

4.  Data Transformation

Data transformation is the meat of the data migration process. Files are generated at this stage based on the Emigall data structures identified and the mapping rules defined. The outputs of the data identification and data mapping steps are supplied to the data transformation, along with the extracted files from the legacy system.


The Rapid Data Migration solution for Utilities details the transformation process using BusinessObjects Data Services (BOBJ DS): Data Migration for Utilities

 

We ensured that the file extraction and the subsequent transformation process were designed in such a way that the lag/wait time for the loading activity was minimal.


We considered having three mutually independent streams (Business Partner, Service Location Master Data and Device Management) at the master data level, ensuring some room for parallel loads. The transformation process was sequenced in such a way that files were transformed and ready for loading in each of these streams without idle time.

 

5.  Data Loading

The data loading activity was achieved in the following ways, depending on the business requirements.

  1. Standard / Custom  Emigall objects.
  2. Custom Reports/BDC
  3. LSMW.

 

When standard SAP objects did not meet our requirements, we preferred building custom Emigall objects over developing ABAP reports. Custom Emigall objects reuse the capabilities of Emigall, which is a powerful tool equipped with exhaustive error logs, load restart capability, data analysis and fast loading; it can be put to better use than building and testing a custom report.


5.1.       Things to consider before the actual execution of Emigall loading:

    1. File conversion: The standard file conversion program REMIG_FILE_TRANSFORM was used to convert the files into the Emigall-compatible (migration) format. Where files were not delivered in the format this program expects, a custom program was developed for the purpose.
    2. File size: Once a file is converted to the Emigall-compatible format, the standard program REMIG_PARSE_FILE was used to break it down into smaller files of a certain packet size. The smaller the file size, the better the performance of the load jobs.
    3. Job parameters: Appropriate object parameters were set for each object before loading. The commit interval differs from object to object; the SAP-recommended commit interval for each object was followed (Guidelines_ISMW).
    4. Job scheduler: The standard data import utility REMIG_JOB_SUBMIT was used to split the files and schedule the jobs by defining the work processes and the application server.
    5. Work processes: The higher the number of work processes, the higher the capacity to run parallel loads.

 

It worked well for us when the data load team worked with the Basis team to determine the appropriate number of background work processes for each object load. We executed two to three different object loads in parallel with 100-110 background work processes in total; a rough sizing sketch follows below.
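For rough planning only, the kind of back-of-the-envelope arithmetic behind those numbers is sketched below. All figures (record counts, packet size, per-work-process throughput) are hypothetical placeholders, not measurements from the project.

```abap
REPORT zmig_load_window_sketch.

" Hypothetical planning figures; replace with values measured in test loads.
CONSTANTS: c_total_records   TYPE i VALUE 2000000, " records in the transformed file
           c_packet_size     TYPE i VALUE 10000,   " records per split file
           c_work_processes  TYPE i VALUE 40,      " background WPs reserved for the object
           c_recs_per_wp_sec TYPE i VALUE 20.      " throughput of one WP, records/second

DATA: lv_packets TYPE i,
      lv_seconds TYPE i,
      lv_hours   TYPE p DECIMALS 1.

" Number of split files the scheduler will distribute across the work processes.
lv_packets = c_total_records DIV c_packet_size.
IF c_total_records MOD c_packet_size > 0.
  lv_packets = lv_packets + 1.
ENDIF.

" Wall-clock estimate: total records / (parallel WPs * per-WP throughput).
lv_seconds = c_total_records / ( c_work_processes * c_recs_per_wp_sec ).
lv_hours   = lv_seconds / 3600.

WRITE: / 'Split files (packets):         ', lv_packets,
       / 'Estimated load window in hours:', lv_hours.
```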

Further details on the scheduling strategy can be found in the Guidelines_ISMW.

 
 

 

5.2        Job Monitoring tools:

The required tips and tricks for monitoring the jobs can be found in the Performance Guidelines for Emigall (Perf_Cookbook), but I would like to point out certain specific parameters we watched during our execution.

  1. Process overview: Transactions SM66 and SM50 were monitored extensively throughout the data migration. Fine-tuning was performed based on the observations in these transactions.
  2. Lock table overview (DB01): Lock entries in active and wait states were monitored and traced for the jobs. The Basis team monitored the handling of the lock entries.
  3. DB statistics: The database statistics for the tables were monitored and updated consistently by the Basis team while the jobs were running. For intense jobs such as device technical installation and billing installation, the same tables (EABL, EABLG, EASTL, etc.) are updated frequently and need their statistics refreshed. The Perf_Cookbook has the details on the required DB statistics.
  4. Performance trace: During test runs on high-volume loads, the performance trace (ST05) was activated and monitored to see which DB tables were being hit and how they performed. Appropriate indexes were created or updated on certain tables depending on the trace results.
  5. OS level: Basis monitoring of OS-level parameters provides insight into CPU utilization, I/O load, network response, memory usage and DB management, and helps determine the correct packet size and number of work processes. We performed this exercise for the Business Partner and meter read loads, which were processing-intensive, and it helped bring the load time down to fit the cut-over window.
  6. Error statistics: Emigall provides a powerful tool to monitor the jobs. The throughput of individual jobs and the error count are monitored from this tool; transaction EMIGSTATS is the door to it.

 

5.3.      Approaches to Load:

Two approaches to loading were considered:

  1. Using Distributed Import
  2. Manual break down and scheduling

 

5.3.1     Using Distributed Import

Emigall provides the means to automate the distribution of the data import files during loads. It takes the names of the converted migration files and the error file name; the split of the error files is handled automatically by the standard SAP master program.

A screen shot of the auto scheduler

Image 6.jpg

Image 7.jpg

 

 

What worked for us:

- The file break-down is handled by the standard master program; all we have to provide is the packet size per file.
- Once the work-process distribution is entered, the master program takes care of allocating the processes as they are freed up or idle.
- A single error statistic is shown, which is updated based on the commit interval set.

What did not work:

- It creates a single item in the statistics. The threads created by the split are not shown, so the individual thread execution rate is unknown.
- The statistics are updated less frequently, so the number of records loaded at any instant could not be determined.

This method splits both the load files and the error files, and produces a single error file, as the master job collects all the error files into one.

 

 

5.3.2       Manual break-down and scheduling.

For larger file volumes, the manual break-down of the file into packets and the scheduling using the standard programs worked well for us.

It involves the steps below:

 

  1. Break down the file using the program REMIG_FILE_PARSE. Provide the packet size and names for the smaller files.

             Image4.jpg

    2. Once the file break-down is done, run the scheduler program REMIG_JOB_SUBMIT with the appropriate inputs to start the scheduler.

                image5.jpg

 

    3. The scheduler can be used to schedule the jobs by selecting the application server name and the date and time for the job execution.

                  Image 8.jpg

 

What worked:

- Manual scheduling of the jobs gave better control over allocating jobs to the work processes.
- The statistics are updated for each individual job running in a work process, giving an idea of the throughput of each job.
- If an application server did not perform as expected, this approach provided the ability to re-allocate jobs to different servers by terminating just the required set.

What did not work:

- The process has many more steps compared to Distributed Import.
- Individual error files are created, so in case of a high error rate it sometimes becomes necessary to rerun the entire file as a single file in restart mode.

 

 

We found the manual scheduling approach better suited to high-volume loads. We were also able to reduce the error count to less than 1% of the total file, so restarting the entire file was very easy and the restart produced a single file in a short time.


Moreover, it provided a better way to tag-team with the Basis team and allocate the jobs to the desired servers without impacting the other parallel jobs. More details on load scheduling can be found in the Guidelines_ISMW.

 

5.4     Error Analysis and Migration Statistics:

Detailed coverage of the error analysis and of reading the migration statistics is given in the Emigall guidelines (Guidelines_ISMW).

Below are the steps we took to analyze the error messages coming out as load rejects:

  1. The errors encountered for each load job were analyzed and the statistics were read from the error logs.

         Image 12.jpg

    2. To get details on an error, highlight the error message and click the long text to get the SAP suggestions for that message. When the long texts made the required correction obvious, this method worked very well as a first pass on the errors.

         Image 9.jpg

     3. If the error long text was not sufficient, the next step was to run the particular record in a separate file in online mode by setting the parameter below. This executes the data as if it were run as an online BDC: uncheck the flag “W/o screens” and run the data file for the error record to reproduce the error online.

          Image 10.jpg

       4. When the first three steps did not suffice and debugging was essential, debug mode was used.

             Image 11.jpg

 

5.5.     Load Performance

Improving the load performance is key to completing the conversion within the cut-over window. We had to apply a series of performance measurement techniques to arrive at a total load time that fit the cut-over window. The load time improved with every load, from the regression tests through the mocks, and the best time was achieved during go-live. I will cover the tips and tricks we used to improve load performance in a separate blog.

 
6.  Data Cleansing

After the initial set of data was loaded into SAP, the data Emigall rejected due to cleansing issues was corrected in Emigall through a set of defined rules and loaded into SAP by restarting the error file load in Emigall.


This approach was taken to push in the master data for business partners, devices, installations, partner relationships and business partner contacts.


Using Emigall to cleanse data was chosen because the time and effort required to clean the affected data in the legacy system would have been considerable. Writing simple rules in Emigall and reloading just the error files was an easier way of approaching the migration.

 

Image 13.jpg


It is very important to load as much of the data as possible, especially master data, into SAP, as this clears the way for the downstream objects to be loaded. Otherwise the downstream objects will fail due to missing higher-level objects (HLOs).


7.  Data Reconciliation

This section covers the data validation/reconciliation performed by the load team to maintain consistency in the number of records.

 

Basically, we looked to match our numbers against the equation below:


Number of records in the transformation file = Data loaded into SAP through Emigall + Data rejected in Emigall

 

The three parameters above can be retrieved from the Emigall statistics as below.

 

Image 14.jpg

 

The records migrated by Emigall are entered into the migration table TEMKSV per object under the user-specific migration company. As soon as the migration is performed for an object, the record count on TEMKSV should be taken.

 

Image 15.jpg

 

A validation check also has to be made against the SAP database, using a query or a simple record count on the SAP tables involved; a minimal count sketch follows below.
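As an illustration, here is a minimal ABAP sketch of such a count check for the Business Partner object. The TEMKSV field names (FIRMA, OBJECT), the Emigall object name 'PARTNER' and the choice of BUT000 as the target table are assumptions for this example; verify the actual names in SE11 and adjust the target table per object.

```abap
REPORT zmig_recon_count_sketch.

" Emigall migration company and object; the values and the TEMKSV
" field names FIRMA/OBJECT are assumptions - verify them in SE11.
PARAMETERS: p_firma(6)  TYPE c DEFAULT 'Z1',
            p_objct(30) TYPE c DEFAULT 'PARTNER'.

DATA: lv_temksv_cnt TYPE i,
      lv_target_cnt TYPE i.

" Key-mapping entries written by Emigall for this object.
SELECT COUNT(*) FROM temksv INTO lv_temksv_cnt
  WHERE firma  = p_firma
    AND object = p_objct.

" Records actually present in the target table (BUT000 for business partners;
" only comparable if no partners existed before the migration).
SELECT COUNT(*) FROM but000 INTO lv_target_cnt.

WRITE: / 'Records in TEMKSV for the object:', lv_temksv_cnt,
       / 'Records in the target table     :', lv_target_cnt.

IF lv_temksv_cnt = lv_target_cnt.
  WRITE: / 'Counts match.'.
ELSE.
  WRITE: / 'Counts differ; check rejects, restarts and manually created records.'.
ENDIF.
```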

 

In the end, the load team can reconcile the data records as below for an end-to-end validation of the number of records created.

 

Data loaded into SAP through Emigall = Number of records in TEMKSV = Number of records in the SAP DB table

 

Additionally, we developed custom reports to reconcile the financial data between the legacy system and SAP. FI-CA documents, payments and security deposits were reconciled, ensuring a penny-to-penny match in the migrated data; a sketch of such a check follows below.
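For illustration, here is a minimal ABAP sketch of the kind of amount check such a custom report performs for migrated FI-CA documents. The TEMKSV selection (field names and the object name 'DOCUMENT'), the hard-coded legacy control total and the per-document SELECT are placeholders to show the idea, not the project's actual report.

```abap
REPORT zmig_fica_recon_sketch.

" Placeholder legacy control total for the migrated FI-CA documents.
CONSTANTS c_legacy_total TYPE p DECIMALS 2 VALUE '1234567.89'.

TYPES ty_newkey TYPE c LENGTH 30.

DATA: lt_newkey TYPE STANDARD TABLE OF ty_newkey,
      lv_newkey TYPE ty_newkey,
      lv_opbel  TYPE opbel_kk,
      lv_docsum TYPE p DECIMALS 2,
      lv_total  TYPE p DECIMALS 2.

" Document numbers created by the Emigall document load
" (TEMKSV field and object names are assumptions - verify in your system).
SELECT newkey FROM temksv INTO TABLE lt_newkey
  WHERE firma  = 'Z1'
    AND object = 'DOCUMENT'.

" Sum the item amounts of each migrated document
" (BETRW = amount in transaction currency; SELECT in a loop kept only for clarity).
LOOP AT lt_newkey INTO lv_newkey.
  lv_opbel = lv_newkey.
  SELECT SUM( betrw ) FROM dfkkop INTO lv_docsum
    WHERE opbel = lv_opbel.
  lv_total = lv_total + lv_docsum.
ENDLOOP.

WRITE: / 'Migrated FI-CA amount in SAP:', lv_total,
       / 'Legacy control total        :', c_legacy_total.

IF lv_total = c_legacy_total.
  WRITE: / 'Penny-to-penny match.'.
ELSE.
  WRITE: / 'Mismatch; drill down document by document.'.
ENDIF.
```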

 

 

I will continue in my next blog with the steps we took to improve the load performance.

 


BPEM Processor Rule


Hi All,

 

I am facing a peculiar problem with a BPEM processor rule.

 

I have defined a processor rule with some responsibilities. When I trigger the case category from the function module BAPI_EMMA_CASE_CREATE (i.e. case creation type: Manual), the processor rule is triggered and the clarification case is assigned to the respective agents.

 

I then changed the case creation type to Automatic for the same case category and tried creating clarification cases through transaction EMMA. The clarification case gets created, but the agent determination does not happen.

 

I am using the same processor rule for both scenarios, which looks strange to me. Can anyone suggest what I might be missing?

BPEM/EMMA Case not getting created.


A customer was originally active with CC XXX, and there is now a pending CC YYY enrollment. A TDSP charges IDoc came in with a start date before the customer's enrollment with YYY and an end date after the enrollment, so it failed due to a time-slice issue, as the period overlaps two customers. The case gets created in EMMACL with CC YYY, which should be XXX.

But my issue is that the EMMA case is not getting created at all in my case.


EA29 Mass bill print


We are running EA29 to generate the RDI billing file. Now we have a requirement to skip certain customers. Is there a user exit we can use in EA29 to filter contract accounts? Any ideas are welcome. Thanks.

BAPI or FM to create Contact records


Hi,

 

I have a requirement to create contact records for the business partner whenever a call is made to a customer, before specific processing starts. It is like the customer-care call services we use in day-to-day life.

 

I receive files containing all the fields from which the contact records have to be created.

In the SAP system, I have to update fields such as date, time, a text field containing 'Inbound/Outbound' based on the called number, and another text field containing 'Callback attempts' based on the number of callback attempts.

 

I already have contact action and contact class values provided to me.

 

I have to update these fields based on the calls made and create the corresponding contact records in SAP.

I have found one BAPI, BAPI_BCONTACT_CREATEFROMDATA, which has date and time fields, but I am not sure where to pass the two text fields.

 

Can anyone please help, as this is urgently required? I am new to FI-CA, so I don't know much about this area.

Any response would be greatly appreciated.

 

Regards,

Ravi.

Two payment runs: first and recurrent for SEPA mandates


Dears,

 

I'm using transaction FPY1 to run payments for SEPA direct debit. I would like to generate two payment files:

 

one for the FIRST execution of mandates, with a due date of 5 days;

another for the RECURRENT execution of mandates, with a due date of 2 days.

 

How can I configure this so that the two cases are separated?

 

At the moment, with the current setup, all mandates are grouped together.

 

Thanks in advance,

 

Thierno

Table relationship for ISU-DFKKOP and CRM


Hi,

 

What is the table relationship between IS-U table DFKKOP and the CRM tables?

 

 

Thanks,

Vimal Alexander

FM to calculate monthly consumption


Hi Experts,

 

I have to find the monthly consumption for a given equipment from table EABL, using the current meter read and the previous meter read.

 

Is there any standard FM to calculate it?
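In case it helps to frame the requirement, below is a minimal ABAP sketch of the fallback calculation (consumption = current reading minus previous reading per register). The EABL field names used here (EQUNR, ZWNUMMER, ADAT, V_ZWSTAND) are assumptions; verify them in SE11, and note that a real solution must also handle the read just before the period, estimated reads and meter rollover.

```abap
REPORT zisu_consumption_sketch.

" Equipment and period to evaluate; values are examples.
PARAMETERS: p_equnr TYPE equnr OBLIGATORY,
            p_from  TYPE d DEFAULT '20140101',
            p_to    TYPE d DEFAULT '20140131'.

TYPES: BEGIN OF ty_read,
         zwnummer TYPE c LENGTH 5,            " register number (field name assumed)
         adat     TYPE d,                     " actual meter reading date
         zwstand  TYPE p LENGTH 9 DECIMALS 3, " meter reading (V_ZWSTAND assumed)
       END OF ty_read.

DATA: lt_read TYPE STANDARD TABLE OF ty_read,
      ls_prev TYPE ty_read,
      ls_curr TYPE ty_read,
      lv_cons TYPE p LENGTH 9 DECIMALS 3.

SELECT zwnummer adat v_zwstand
  FROM eabl
  INTO TABLE lt_read
  WHERE equnr = p_equnr
    AND adat BETWEEN p_from AND p_to.

SORT lt_read BY zwnummer adat.

" Consumption per register = current reading minus the previous reading.
LOOP AT lt_read INTO ls_curr.
  IF ls_prev IS NOT INITIAL AND ls_prev-zwnummer = ls_curr-zwnummer.
    lv_cons = ls_curr-zwstand - ls_prev-zwstand.
    WRITE: / 'Register', ls_curr-zwnummer, 'date', ls_curr-adat, 'consumption', lv_cons.
  ENDIF.
  ls_prev = ls_curr.
ENDLOOP.
```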

 

Thanks,

Jasvinder kumar

FPCOPARA Job Log


Hi,

 

For some reason the FPCOPARA job is failing.

How can I see the log for those entries?

 

Thanks,

Vimal Alexander

FI-CA Write Off effect on Banking System cash flow


Hello Everyone,

 

When a write-off is performed in ECC FI-CA on a contract created in BS (Banking System), there is no effect on the BS contract itself: the contract is not closed and nothing happens to the cash flow.

However, I need to close the contract created in BS after a write-off is performed in ECC FI-CA, or alternatively to show the write-off effect in the BS cash flow.

 

Thanks and Regards.

 

Ermanno

FPCOPARA Job fail, Dump : message_type_x


Hi,

 

The FPCOPARA job is failing and raised the short dump MESSAGE_TYPE_X.

We checked in ST22; the error analysis says: Internal error: Parameter X_REC_ADDR was not supplied in FM EFG_PRINT_EXPANDED.

 

Why is the FPCOPARA job failing?

How can we resolve the issue?

 

 

Thanks,

Vimal Alexander

Installation group order suppression.


A new secondary installation is moved in, but the billing order is suppressed due to its close proximity to a periodic billing. This in turn prevents the primary installation from billing, since not all secondary installations have billed. Is there a way to override the order suppression in installation group scenarios?

Command group in IS-U


Hello experts,

 

I want to use a command group for street lights. For this I have created a command and a command group in IS-U.

 

Now I want to know how to use this command group.

 

Please advise in detail.

 

Regards,

Priya
