Posts

Showing posts from May, 2023

Abinitio Interview Questions 34

For Class Notes please visit:

i/p: a,a,b,c,d,a,b,c,d

record
  string("\n") input_str;
end

out :: length(in) =
begin
  out :: length_of(string_split(in.input_str, ','));
end;

Field values after NORMALIZE: a a b c d a b c d

o/p after ROLLUP on the field:
a,a,a
b,b
c,c
d,d

Flow: INPUT -> NORMALIZE -> ROLLUP(field)

For more Abinitio, AWS and data engineering videos please subscribe, view, like and share my YouTube channel: DataPundit
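The NORMALIZE-then-ROLLUP flow above can be sketched in plain Python; this is a minimal illustration of the same logic, not Ab Initio code (the function name is my own):

```python
from collections import Counter

def normalize_then_rollup(line: str) -> list[str]:
    """Split a comma-separated line into tokens (NORMALIZE),
    then regroup equal tokens into runs (ROLLUP keyed on the field)."""
    tokens = line.strip().split(",")      # NORMALIZE: one record per value
    counts = Counter(tokens)              # ROLLUP: count per key
    seen, groups = set(), []
    for t in tokens:                      # preserve first-seen order
        if t not in seen:
            seen.add(t)
            groups.append(",".join([t] * counts[t]))
    return groups

print(normalize_then_rollup("a,a,b,c,d,a,b,c,d"))
# -> ['a,a,a', 'b,b', 'c,c', 'd,d']
```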

Abinitio Interview Questions 32

Watch my YouTube video for explanation.

Solve using Abinitio.

Input:
DeptID  Teacher    IsAssigned
D1      Teacher1   1
D1      Teacher2   1
D1      Teacher3   0
D2      Teacher1   0
D2      Teacher2   1
D2      Teacher3   0

Output:
DeptID   Teacher1   Teacher2   Teacher3
D1       1          1          0
D2       0          1          0

Input ---> Rollup(DeptID)

type temporary_type = record
  decimal("") Teacher1;
  decimal("") Teacher2;
  decimal("") Teacher3;
end;

out :: initialize(in) =
begin
  out.Teacher1 :: 0;
  out.Teacher2 :: 0;
  out.Teacher3 :: 0;
end;

out :: rollup(temp, in) =
begin
  out.Teacher1 :: if (in.Teacher == 'Teacher1' and in.IsAssigned == 1) 1 else temp.Teacher1;
  out.Teacher2 :: if (in.Teacher == 'Teacher2' and in.IsAssigned == 1) 1 else temp.Teacher2;
  out.Teacher3 :: if (in.Teacher == 'Teacher3' and in.IsAssigned == 1) 1 else temp.Teacher3;
end;

out :: finalize(temp, in) =
begin
  out.DeptID   :: in.DeptID;
  out.Teacher1 :: temp.Teacher1;
  out.Teacher2 :: temp.Teacher2;
  out.Teacher3 :: temp.Teacher3;
end;
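The same initialize/rollup/finalize pivot can be sketched in Python; a minimal illustration of the grouping logic, with my own names, not the actual component:

```python
from collections import defaultdict

rows = [  # (DeptID, Teacher, IsAssigned)
    ("D1", "Teacher1", 1), ("D1", "Teacher2", 1), ("D1", "Teacher3", 0),
    ("D2", "Teacher1", 0), ("D2", "Teacher2", 1), ("D2", "Teacher3", 0),
]

def pivot(rows):
    """Rollup keyed on DeptID: initialize every teacher flag to 0,
    then set a flag to 1 when that teacher's row has IsAssigned == 1."""
    out = defaultdict(lambda: {"Teacher1": 0, "Teacher2": 0, "Teacher3": 0})
    for dept, teacher, assigned in rows:
        out[dept]  # touch the key so unassigned-only depts still appear
        if assigned == 1:
            out[dept][teacher] = 1
    return dict(out)

print(pivot(rows))
```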

Abinitio Interview Questions 31 m_eval commands

Watch my YouTube video for explanation.

Quick commands in Abinitio:

35.1 Concatenation of strings
m_eval 'string_concat("abc","cde")'
abccde

35.2 Print the type instead of the value
m_eval -print-type -no-print-value 3.14159
double

35.3 Quick testing of functions
m_eval -include $AI_XFR/myfunctions.xfr 'getRate(1890, "JAN")'

35.4 In the context of a pset
cat .project.pset
DML|En|||sandbox/dml
m_eval -context .project.pset "'\$DML is ' + \$DML"
"$DML is sandbox/dml"

35.5 Lookup
m_eval 'lookup("ProductList","908").description'
SSD Drive 987

35.6 Date arithmetic
m_eval '(date("YYYYMMDD")) (today() - 10)'
For example, if today = 20230520, the result is 20230510.

Abinitio Interview Questions 30 m_dump commands

Watch my YouTube video for explanation.

m_dump file.dml mfs -start 1 -end 10
m_dump loader.dml mfile:mfs8/tempfile.dat -partition 3
m_dump loader.dml mfile:mfs16/tempfile.dat -select 'empid==90'
m_dump loader.dml mfile:mfs32/tempfile.dat -print-no-data -print-n-records
m_dump loader.dml mfile:mfs32/tempfile.dat -print-n-records
m_dump loader.dml mfile:mfs32/tempfile.dat -record 12

Abinitio Interview Questions 29 Dynamic Layout

Watch my YouTube video for explanation.

What is a dynamic MFS layout?
- fixed depth
- variable depth

How is it done?
fixed depth:    mfile:dynamic:n  OR  mfile:dynamic:$DEPTH
variable depth: mfile:dynamic:-1:data-path[:MB-per-partition[:max-depth]]
(-1 means the Co>Operating System decides the depth of parallelism at runtime, not the user)

Creating a dynamic single-directory MFS:
build-mfs -dynamic -singledir -mfs-depth 64 -mfs-mount s3://my-bucket/mfs-64way
layout: s3://my-bucket/mfs-64way  OR  mfile:dynamic:64

What are the advantages of a dynamic layout?
a. Migration
b. Promotion
c. Collaboration and Reuse

Abinitio Interview Questions 28 Abinitiorc Files

Watch my YouTube video for explanation.

What are the different abinitiorc files?

30.1 Global abinitiorc file: /etc/abinitio/abinitiorc
At the level of the host; it applies to all installations under the same host.
a. It is an optional abinitiorc file.
b. It is global in nature, as its declarations impact all installations done under the host.
c. Its variables cannot be overridden.
d. Include statements can be used in this config file.
Use cases: AB_CHARSET, AB_OUTPUT_FILE_UMASK (UNIX/Windows), super user (admin).

30.2 Server-installation-specific abinitiorc file: $AB_HOME/config/abinitiorc
Used for a specific server and all users under that server.
a. Main configuration file for an individual Co>Operating System installation.
b. Generated when the installation is done.
c. Affects all users who use that installation under the server.

Abinitio Interview Questions 27 Date Algorithms

Watch my YouTube video for explanation.

Take 2 dates and produce the quarter boundary for each: the first date of the quarter for DATE1, and the last date of the quarter for DATE2. There will be 2 output fields in the output file.

DATE1 = 25102022 / 24022022
DATE2 = 24082023 / 11072023

fqdt = 25102022 -> 01102022
lqdt = 24082023 -> 30092023

Yr_part1 = string_substring(DATE1, 5, 4);  /* 2022 */
Yr_part2 = string_substring(DATE2, 5, 4);  /* 2023 */

let string(",")[int] quarter_dates1 = ["0101" + Yr_part1, "0104" + Yr_part1, "0107" + Yr_part1, "0110" + Yr_part1];
let string(",")[int] quarter_dates2 = ["3112" + Yr_part2, "3009" + Yr_part2, "3006" + Yr_part2, "3103" + Yr_part2];

for (i, i < 4)
begin
  qstartdt = if ((date("DDMMYYYY")) quarter_dates1[i] < (date("DDMMYYYY")) fqdt) (date("DDMMYYYY")) quarter_dates1[i];
  qenddt   = if ((date("DDMMYYYY")) quarter_dates2[i] > (date("DDMMYYYY")) lqdt)  (date("DDMMYYYY")) quarter_dates2[i];
end
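For reference, the quarter-boundary arithmetic can be checked with a short Python sketch (function names are my own, using the standard library, not Ab Initio):

```python
from datetime import date, timedelta

def quarter_start(d: date) -> date:
    """First day of the quarter containing d."""
    return date(d.year, 3 * ((d.month - 1) // 3) + 1, 1)

def quarter_end(d: date) -> date:
    """Last day of the quarter containing d: the day before the
    first day of the next quarter."""
    start = quarter_start(d)
    if start.month == 10:
        return date(d.year, 12, 31)
    return date(d.year, start.month + 3, 1) - timedelta(days=1)

print(quarter_start(date(2022, 10, 25)))  # 2022-10-01 (fqdt example)
print(quarter_end(date(2023, 8, 24)))     # 2023-09-30 (lqdt example)
```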

Abinitio Interview Questions 26

Watch my YouTube video for explanation.

Get the unique values from the input data.

Input:
code, country
1, India
2, Mumbai
1, USA
2, Newyork
1, UK
2, Edinburgh
2, London

Output:
code, countries
2, India,Mumbai
2, USA,Newyork
2, UK,Edinburgh,London

type temporary_type = record
  string(",") l_countries;
  decimal("") cnt;
end;

temp :: initialize(in) =
begin
  temp.l_countries :: in.country;
  temp.cnt :: 0;
end;

out :: key_change(in1, in2) =
begin
  out :: (in2.code == 1);
end;

out :: rollup(temp, in) =
begin
  out.l_countries :: if (temp.cnt != 0) string_concat(temp.l_countries, ',', in.country) else temp.l_countries;
  out.cnt :: temp.cnt + 1;
end;

out :: finalize(temp, in) =
begin
  out.code :: in.code;
  out.countries :: temp.l_countries;
end;

For more Abinitio, AWS and data engineering videos please subscribe, view, like and share my YouTube channel: DataPundit
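The key_change/rollup logic above can be sketched in Python; a minimal illustration under my own naming, not the actual transform:

```python
def group_on_marker(rows):
    """A new group starts whenever code == 1 (the key_change test);
    every record's country is appended to the current group (rollup),
    and each finished group is emitted with code 2 (finalize)."""
    groups, current = [], None
    for code, country in rows:
        if code == 1 or current is None:
            if current:                          # flush the previous group
                groups.append((2, ",".join(current)))
            current = []
        current.append(country)
    if current:                                  # flush the last group
        groups.append((2, ",".join(current)))
    return groups

print(group_on_marker([(1, "India"), (2, "Mumbai"), (1, "USA"), (2, "Newyork"),
                       (1, "UK"), (2, "Edinburgh"), (2, "London")]))
# -> [(2, 'India,Mumbai'), (2, 'USA,Newyork'), (2, 'UK,Edinburgh,London')]
```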

Abinitio Interview Questions 25 Day to Day Abinitio Commands

Watch my YouTube video for explanation.

a. How to know the depth of an MFS:
   m_expand -n <mfs>

b. air sandbox diff ==> display the differences between 2 graphs/files.
   Comparing 2 files in different sandboxes:
   air sandbox diff <sandbox1-path>/a.dml <sandbox2-path>/a.dml
   Comparing a sandbox file and its TR file:
   air sandbox diff my-sandbox/dml/b.dml
   Comparing a sandbox graph with the current EME version of the graph:
   air sandbox diff -version current my-sand/mp/p.mp
   Comparing a TR version and the sandbox graph:
   air sandbox diff -version <eme-version> my-sand/mp/p.mp

c. Count the number of fields in a file:
   head -1 file-name | sed 's/|/\n/g' | wc -l

d. plan-admin set parameter-name new-value

e. m_password -prompt/-password -restrict restriction | -unrestricted

Abinitio Interview Questions 24 Abinitio Parallelism Advanced

Watch my YouTube video for explanation.

Component parallelism
Components that increase component parallelism:
  Replicate, Input File, Reformat, FBE, PBE, Dedup Sorted, Split
Components that decrease component parallelism:
  Gather, Join, Fuse, Combine, Concatenate

Pipeline parallelism
  SORT, ROLLUP, JOIN, Dedup Sorted
  Phases / checkpoints; component folding
  Continuous graphs: checkpoints --> compute points

Data parallelism
Types of layouts: data layout, processing layout
  file:   serial
  mfile:  mfs / ad hoc multifile
Dynamic layout:
  fixed-depth dynamic layout => mfile:dynamic:$DEPTH_OF_PARALLELISM
  e.g. mfile:dynamic:16:/data/warehouse/2023/dec (note: can be used to read and write)
  variable-depth dynamic layout => mfile:dynamic:-1:data-path[:MB-per-partition[:max-depth]]

Abinitio Interview Questions 23 Multifile System Part 2

Watch my YouTube video for explanation. Please look into the class notes here for your references:

b. Use the m_partition command:
   m_partition <source-url-path> <destination-url-path> <DML> [KEY]
   - partitions a serial file into a multifile
   - repartitions a multifile into another multifile

c. m_mv, m_cp, m_gzip, m_gunzip

d. If the depth is the same, copy the serial locations one by one (manual workaround):
   On the target server: m_touch mfs-file-name
   cp <partition#0 of source> <partition#0 of destination>
   cp <partition#1 of source> <partition#1 of destination>
   ...
   cp <partition#n-1 of source> <partition#n-1 of destination>

Abinitio Interview Questions 22 Multifile System Part 1

Watch my YouTube video for explanation.

a. Create a graph and use an all-to-all flow:
   Input -> Partitioning Component (target depth <, =, or > source depth) -> Gather/Merge -> Output

Unix Miscellaneous Commands | 10 Useful UNIX commands

For Class Notes:

Unix commands and use cases:

1. tar command (what is it, why we use it)
   tar <options> <archive-name> <files-to-be-archived>
   tar -xzvf myfiles.tar.gz
   tar -xzvf myfiles.tar.gz -C /dir/test
   tar -tvf myfiles.tar.gz
   tar -czvf myfiles.tar.gz /dir/test2

2. source command
   source abc.ksh

3. mount command

4. split
   split -l 200 myfile datapundit
   (creates datapunditaa, datapunditab, datapunditac, ...)
   split -b 200k myfile datapundit
   (creates datapunditaa, datapunditab, datapunditac, ...)

5. df / du
   df: mounted on, size in blocks, free, used; df --total
   du -d 1 /home/mandeep/test
   du -c -h /home/mandeep/test
   du -a -h /home/mandeep/test

6. sed '/exit/d' filename.txt

7. sed '2,$ s/unix/linux/' geekfile.txt

8. awk '/UUID/ {print $0}' /etc/fstab

9. finger David
   cat emailist.txt
   a@gmai.com
   b@datapunit.org.in

10. echo -e "Hi There, \n Please Find the attached cash transaction  re

Different ways to create Abinitio MFS or How to create MFS in Abinitio

Different ways to create Abinitio MFS | How to create MFS in Abinitio | multifile system in Abinitio

For class notes as below:

1. build_mfs -env <path-to-stdenv>/AB_ENV_ROOT -data-areas disk1/p1 disk2/p2 -mfs-depth 4

   Or a single-directory MFS on hdfs, s3a, gs, or Azure (blob: wasb/wasbs, data lake: abfss):
   s3a://bucket/pdir/
   gs://hostname/pdir/
   wasb://containeraccount/dir/path

   build-mfs -basedir /abinitio/apps/ste-enable/dataPundit/stdenv -mfs-depth 8 -mfs-mount /abinitio/data/papaEarth/dataPundit/mfs -data-areas /abinitio/data/papaEarth/dataPundit/mfs/parts

   build-mfs -dynamic -singledir -basedir /abinitio/apps/sandboxes/papaEarth/Projects/sand/stdenv -mfs-depth 64 -mfs-mount /abinitio/data/papaEarth/dataPundit/mfs

2. m_mkfs

3. install_environment (mfs-depth, data-areas)

4. m_touch
   //datapundit.in/user/nd/mfs/mfs-4way/dust/a.dat
   //datapundit.in/user/nd/mfs/mfs-4way/dust/n12/a.dat

DynamoDB Schema Design the Expert Way

Watch my YouTube video for explanation. Please look into the class notes here for your references:

DynamoDB Schema Design
** Limit the access patterns.
** Throttling: the reads exceed the provisioned RCU.
** Data latency, storage.

Options: use NoSQL Workbench, create directly in DynamoDB, or use PartiQL.

[Partition Key]            {primary key}
[Partition Key, Sort Key]  {composite primary key}

Partition Key - storage index
Sort Key      - ordering

dept, opening_date
1, 12-03-2022
2, 22-09-2020
2, 21-12-2020
3, 17-06-2021
4, 16-05-2021

Partitions --->
partition 1: (1, 12-03-2022), (3, 17-06-2021)
partition 2: (2, 22-09-2020), (2, 21-12-2020)

Case I: lookup of a known, globally unique key.
empid, joining_date
1, 16-05-2021
2, 21-12-2020
3, 17-06-2021
Use the partition key.

Case II: when the key is non-unique, or when range-like queries are done on some other value.
dept, opening_date
1, 12-03-2022
2, 22-09-2020
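To make "Partition Key - storage index" concrete: DynamoDB hashes the partition key to pick the physical partition, so all items with the same key land together. A toy Python sketch of that idea (md5 here is only illustrative; DynamoDB's internal hash is not public):

```python
import hashlib

def partition_for(key: str, n_partitions: int) -> int:
    """Toy partition chooser: hash the partition key and take it
    modulo the partition count, so equal keys always co-locate."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_partitions

# items sharing dept "2" always land on the same partition
print(partition_for("2", 4), partition_for("2", 4))
```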

Interview Question 18 AI Commands

Watch my YouTube video for explanation. Please look into the class notes here for your references:

23.1 Size of an MFS file (number of records in an MFS file):
   m_dump <dml> <url-path> | grep 'Record' | wc -l

23.2 Size in bytes:
   $[file_information("file-path").size]

23.3 Total number of records in an MFS file:
   m_dump <dml> <datafile> -print-n-records
   m_dump -string "   " <datafile> -print-n-records
   Better:
   m_dump <dml> <datafile> -no-print-data -print-n-records

23.4 Listing of files:
   directory_listing("path", pattern="a*")

For more Abinitio, AWS and data engineering videos please subscribe, view, like and share my YouTube channel: DataPundit

Interview Question 19

Watch my YouTube video for explanation. Please look into the class notes here for your references:

My i/p file (fields f1, f2):
10,A
20,B
30,C

My o/p will be:
10,C
20,B
30,A

Approach: split the two columns; keep f1 (10, 20, 30) as is; reverse f2 (A,B,C -> next_in_sequence -> sort descending -> C,B,A); then FUSE the two flows back together.

INPUT -> Reformat -> ... -> FUSE
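The split-reverse-FUSE idea can be sketched in a few lines of Python (my own naming; the DML flow does the reversal with next_in_sequence plus a descending sort):

```python
def fuse_reversed(rows):
    """Split the two columns, reverse the second column,
    then FUSE the flows back together positionally."""
    f1 = [r[0] for r in rows]          # first column, kept in order
    f2 = [r[1] for r in rows][::-1]    # second column, reversed
    return list(zip(f1, f2))           # FUSE: pair records by position

print(fuse_reversed([(10, "A"), (20, "B"), (30, "C")]))
# -> [(10, 'C'), (20, 'B'), (30, 'A')]
```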

Interview Question 20

Watch my YouTube video for explanation. Please look into the class notes here for your references:

i/p (airport, from, to):
Mumbai,f1,f2
Mumbai,f2,f1
Delhi,f1,f2
Delhi,f2,f1
Pune,f1,f2
Bangalore,f1,f2
Chennai,f1,f2
Chennai,f2,f1
Hyderabad,f1,f2
Pune,f2,f1
Delhi,f2,f1
Delhi,f1,f2
Chennai,f1,f3
Chennai,f3,f1

o/p (airport, from, to):
Mumbai,f1,f2
Delhi,f1,f2
Pune,f1,f2
Bangalore,f1,f2
Chennai,f1,f2
Hyderabad,f1,f2
Chennai,f1,f3

Flow:
INPUT -> Replicate(2)
  - input as is               -> JOIN on key (f1, f2, f3) -> collect Unused0 + Output
  - shuffled columns 2 and 3  -> Unused1
-> Gather -> Dedup on f1 -> Final Output
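The effect of that graph, treating (from, to) and (to, from) as the same route, can be sketched directly in Python (a minimal illustration with my own naming, not the JOIN-based graph itself):

```python
def dedup_routes(rows):
    """Key each record by the airport plus the *sorted* (from, to) pair,
    so a shuffled duplicate maps to the same key; keep first occurrence."""
    seen, out = set(), []
    for airport, a, b in rows:
        key = (airport, tuple(sorted((a, b))))
        if key not in seen:
            seen.add(key)
            out.append((airport, a, b))
    return out

print(dedup_routes([("Mumbai", "f1", "f2"), ("Mumbai", "f2", "f1"),
                    ("Chennai", "f1", "f3"), ("Chennai", "f3", "f1")]))
# -> [('Mumbai', 'f1', 'f2'), ('Chennai', 'f1', 'f3')]
```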

Interview Question 21 Nth Highest Salary Multiple Solutions

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Get the employee with the Nth highest salary. Provide multiple solutions:
1. Using the dense_rank analytical function in SQL
2. Using a sub-query
3. Using the Scan component in Abinitio

Sol 1:
with T as
  (select *, dense_rank() over (order by salary desc) as RANK from employee)
select name, salary from T where RANK = N

Sol 2:
select name, salary from employee A
where N-1 = (select count(distinct B.salary) from employee B where B.salary > A.salary)

Sol 3:
Scan: assign a rank flag, and filter in the output_select function.
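The dense-rank semantics of Sol 1 can be checked with a short Python sketch (my own function name): rank distinct salaries descending and take the Nth.

```python
def nth_highest(salaries, n):
    """Dense-rank equivalent: duplicates share a rank, so rank over the
    *distinct* salaries sorted descending, then pick the nth."""
    distinct = sorted(set(salaries), reverse=True)
    return distinct[n - 1] if 1 <= n <= len(distinct) else None

print(nth_highest([700, 500, 300, 900, 900], 2))  # -> 700
```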

Write Excel Data in Abinitio |Write Excel Flow in abinitio | How to writ...

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Write Excel files.

DML:
utf8 record =
begin
  string('\t') sheet;
  string('\0') line;
end

Create Data (num_records = 2):
out.sheet :: if (index == 0) 'customer' else 'revenue';
out.line  :: if (index == 0) 'customerid\tzip\taddress' else 'revenueID\trevenueAmt\trevenueDate';

Customer Reformat:
out.sheet :: 'customer';
out.line  :: "'" + "\t" + in.customerid + "\t" + in.zip + "\t" + in.address + "'";

Revenue Reformat:
out.sheet :: 'Revenue';
out.line  :: "'" + "\t" + in.revenueID + "\t" + in.revenueAmt + "\t" + in.revenueDate + "'";

Parameters:
xls: $AI_SERIAL/excels/relationsheep.xls
write-mode: newworkbook / append-records
utf8: True
date-format: yyyy-mm-dd
autosize-column: True
record-delimiter: \0
use-format: True
format-at

How to Create Local Secondary Index in DynamoDB | LSI in DynamoDb

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Local Secondary Index

Table: FormPost
Partition Key = ForumName
Sort Key (Range Key) = LastPostDateTime

{
    TableName: "FormPost",
    KeyConditionExpression: "ForumName = :a and LastPostDateTime = :t",
    ExpressionAttributeValues: {
        ":a": "EC2",
        ":t": "2015-02-12:11:07:56"
    }
}

Only one sort key is applicable for a DynamoDB table, so applying a key condition on another attribute becomes difficult.

Approach 1: Scan it. Read capacity costs money: costly and poor latency.
Approach 2: ForumName == 'S3' narrows down the data, then filter Subject == 'aaa': still poor latency, and also costly.
Approach 3: Create an LSI on Subject.

{
    TableName: "FormPost",
    KeyConditionExpression: "ForumName = :a and Subject =

How to create Global Secondary Index in DynamoDB

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Global Secondary Index

Employee table:
EMPID, JOINING_DATE, LOCATION
1, 20-02-2009, Bangalore
2, 12-03-2009, Mysore
3, 31-01-2018, Delhi
4, 27-02-2014, Gurugram
5, 19-04-2008, Pune
6, 18-07-2004, Bangalore

Partition Key   = EMPID
Sort Key        = JOINING_DATE
Other attribute = LOCATION

Query: find all EMPIDs whose location is Bangalore.

Approach 1: SCAN operation with a filter expression (LOCATION == 'Bangalore').
Approach 2: Create a GSI and, using the index name, look up the data where LOCATION == 'Bangalore'.

Meaning of creating a GSI:
a. Creating a GSI means defining a new partition key.
b. When we create a GSI, we create a new table with the new partition key, and the two tables are kept in sync.

LOCATION, JOINING_DATE, EMPID
Bangalore, 20-02-2009, 1
Mysore, 12-03-2009, 2
Delhi, 31-01-2018, 3

XML Combine | How to Write XML Documents in Abinitio

Watch my YouTube video for explanation. Please look into the class notes here for your references:

2. Generate the XML data with XML Combine:
   a. Create the DML from the TARGET .xsd or an exemplar .xml file.
   b. Read the data from the AI native DML.
   c. Create/map the data in vector or nested record formats.
      Case 1: use Reformat if items are singular or one-to-one mappings.
      Case 2: use Rollup if the requested format is a denormalized form.
   d. Use XML Combine to produce the XML records.
   e. Use an Output File to collect/create the .xml data file.

Components used to create the XML data:
XML Combine
  input_dml  - xml_compatible_dml.dml
  output_dml - record string('\n') ; end;

DAX | DynamoDB Accelerator in AWS DynamoDB | Caching for AWS DynamoDB

Watch my YouTube video for explanation. Please look into the class notes here for your references:

DAX: DynamoDB Accelerator

Cache: a subset of the data (storage layer) kept in-memory to quickly retrieve and serve content, either local or remote (DAX, i.e. multiple nodes working as a unit).

Application on EC2 ---> DynamoDB
        |                  |
      Cache               DAX

- Fully managed, highly available caching layer for DynamoDB
- Delivers microsecond performance
- Horizontally and vertically scalable
- Supports Get/Query/Scan + Update/Delete/Put + LSI/GSI operations
- Easy migration from DynamoDB to DAX
- The cache is hosted on nodes (memory-optimized EC2); you pay per node

When to use?
- Consistent/burst traffic on the same set of keys requiring microsecond response times
- If your query case can tolerate eventual consistency
- Read-intensive, not write-intensive workloads

How DAX works:
Revenue Table ---> DAX cluster ---> Load Balancer

Abinitio Interview Question 2 | parallel processing | Multiple processing

Watch my YouTube video for explanation. Please look into the class notes here for your references:

How will 100 multifiles/serial files be processed simultaneously using Ab Initio?
a. We need to read 100 files/MFS.
b. We need to write 100 files/MFS.
c. Any other multiprocessing.

revenue_file_apac.dat
revenue_file_nam.dat
.
.
.
revenue_file_sa.dat

Approach:
1. We can try creating a plan.
   a. Write a generic graph:
      INPUT --> PROCESSING LOGIC --> OUTPUT
      Create a pset with parameters DML_NAME, INPUT_FILE_NAME, OUTPUT_FILE_NAME.
   b. Create a plan:
      Build a vector of files:
      FILE_VEC = directory_listing("$INPUT_FILE_PATH", "revenue_file*.dat");
      For-each-value loop:
      LOOP_VALUE_VECTOR = FILE_VEC
      LOOP_CONCURRENT = false/true
      AB_PLAN_LOOP_CURRENT_VALUE
      DML_NAME, INPUT_FILE_NAME, OUTPUT_FILE_NAME
      revenue_apac_rec_format.dml

Abinitio Interview Question 3

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Prepare a DML which can read the records as follows (conditional DML).

$ cat Oregon-revenue.dat
H,Revenue report for Oregon
D,D1,D2,5000
D,D11,D22,6000
...
D,D1n,D2n,50000
T,24-07-2022,89765

record
  string(",") rec_type;
  if (rec_type == 'H') string("\n") header;
  if (rec_type == 'D') string(",") data1;
  if (rec_type == 'D') string(",") data2;
  if (rec_type == 'D') decimal("\n") amt;
  if (rec_type == 'T') date("DD-MM-YYYY")(",") tr_date;
  if (rec_type == 'T') decimal("\n") tran_amt;
end;
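For a quick sanity check of the record routing, here is the same conditional logic as a plain Python parser (a minimal sketch with my own names, not the DML itself):

```python
def parse_record(line: str) -> dict:
    """Route each line on its first field, as the conditional DML does:
    'H' -> header, 'D' -> detail, anything else -> trailer."""
    parts = line.rstrip("\n").split(",")
    rec_type = parts[0]
    if rec_type == "H":
        return {"type": "H", "header": parts[1]}
    elif rec_type == "D":
        return {"type": "D", "data1": parts[1], "data2": parts[2], "amt": int(parts[3])}
    else:
        return {"type": "T", "tr_date": parts[1], "tran_amt": int(parts[2])}

print(parse_record("D,D1,D2,5000"))
# -> {'type': 'D', 'data1': 'D1', 'data2': 'D2', 'amt': 5000}
```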

Abinitio Interview Question 4

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Process the following using Abinitio.

i/p:
student_id, sub1, marks1, sub2, marks2, sub3, marks3
1, ss, 5, sanskrit, 6, maths, 10
2, science, 6, maths, 7, physics, 12

Output:
studentId | sub      | marks
1         | ss       | 5
1         | sanskrit | 6
1         | maths    | 10
2         | science  | 6
2         | maths    | 7
2         | physics  | 12

Alternative long form:
fieldName | fieldValue | stID
sub1      | ss         | 1
marks1    | 5          | 1
sub2      | sanskrit   | 1
marks2    | 6          | 1
sub3      | maths      | 1
marks3    | 10         | 1

** Creating multiple records out of 1 record: Normalize
** How many records from 1 record: length()

out :: length(in) =
begin
  let decimal("") l_cnt = length_of(string_split(in.rec, ",")) / 2;
  out :: l_cnt;
end;

out :: normalize(in, index) =
begin
  let string(",")[int] l_rec_vec = string_split(in.rec, ",");
  out.studentId :: in.rec
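The length()/normalize() pair above boils down to "emit one record per (subject, marks) pair". A minimal Python sketch of that unpivot, under my own naming:

```python
def normalize_marks(row: str):
    """length(): number of (subject, marks) pairs after the id;
    normalize(): one output record per pair, carrying the student id."""
    fields = [f.strip() for f in row.split(",")]
    student_id, rest = fields[0], fields[1:]
    n = len(rest) // 2                          # length(): pairs per record
    return [(student_id, rest[2 * i], int(rest[2 * i + 1])) for i in range(n)]

print(normalize_marks("1, ss, 5, sanskrit,6, maths,10"))
# -> [('1', 'ss', 5), ('1', 'sanskrit', 6), ('1', 'maths', 10)]
```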

Abinitio Interview Question 5

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Following is a scenario:

Transaction ID | Name | Transaction Amount | Address
1 | ABC | 700 | A_1
1 | CDE | 500 | A_2
1 | EFG | 300 | A_3
1 | GHI | 900 | A_4
2 | LMN | 300 | B_1
2 | OPQ | 900 | A_B2

Qa. Find the maximum transaction amount per Transaction ID.

Rollup, key = Transaction ID:
out :: rollup(in) =
begin
  out.* :: in.*;
  out.Transaction_Amount :: max(in.Transaction_Amount);
end;

Qb. Will rollup make sure that, in the o/p, the values of the remaining fields like Name and Address will be exactly the same? For ID = 1, will I get the Name as "GHI" and the Address as "A_4"?
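One way to reason about Qb: in a template-mode rollup, the wildcard `out.* :: in.*` is commonly described as taking the non-aggregated fields from the last record of the group, so Name/Address are not guaranteed to come from the max-amount record. A Python sketch of that behavior (my own naming; it assumes the input is sorted by Transaction ID, as a rollup would require):

```python
from itertools import groupby

def max_by_id(rows):
    """Per group: amount = group maximum, but the other fields
    (Name, Address) are taken from the group's *last* record,
    mirroring out.* :: in.* in a template rollup."""
    out = []
    for tid, grp in groupby(rows, key=lambda r: r[0]):
        grp = list(grp)
        last = grp[-1]                       # source of the non-aggregated fields
        max_amt = max(r[2] for r in grp)     # the aggregated field
        out.append((tid, last[1], max_amt, last[3]))
    return out
```

On this sample data the last record of group 1 happens to also hold the maximum (GHI, 900, A_4), which is why the two can be easy to confuse; with a different record order they would differ.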

Integrate your REST API with AWS Services using API Gateway Service Proxy

Watch my YouTube video for explanation. Please look into the class notes here for your references:

Keywords: api gateway service proxy, api gateway proxy, api gateway dynamo, api gateway dynamodb, dynamodb api, aws api gateway dynamodb, dynamodb api gateway, aws cloud, aws service cloud, aws tutorial, aws training

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html
https://grey-water-550013.postman.co/workspace/3116b18c-69cb-48f5-82de-2cbed72724a0/request/create?requestId=17abc923-b907-435d-9de1-e571cb6a1b4d

1. DynamoDB table: employee (empid as partition key, deptid as sort key)
2. API Gateway endpoint:
   Create the API: create a resource, create a method, configure the method for a POST operation to the DynamoDB service, and test the method using the required JSON payload.
   Deploy the method. Get the URL.
   Use Postman to post the data payload (JSON) to API Gateway.
   Receive the response.