Test Estimation Tips to Ensure Timely and Quality Delivery

"Gaining knowledge is the first step to wisdom; sharing it is the first step to humanity…" – Unknown

There are several approaches available for test estimation. Some are very theoretical and do not fit into day-to-day project activities. Software projects are living things: it is not easy to backtrack once the software is built, because every change affects cost and schedule. Due to this dynamic nature of software projects, it is vital that estimation is done in a way that is practical and pragmatic for the quality and timely delivery of the project.

Before we dive in, let's enjoy some estimation humor – without humor the world would be even more boring, and this is just an article. ;)

1. Any project can be estimated accurately…
   …once it's completed.
2. Accurate estimation is not impossible…
   …for the person who doesn't have to do it.
3. The person who says it will take the longest and cost the most is probably…
   …the only one with a clue how to do the job.

Why is a Test Estimate Needed?

The two most frequent questions you get from clients when discussing test engagements are:

1. How long will your team take to complete testing?
2. How much will it cost?

Estimation is done because it helps forecast how much the project will cost and when it will be completed. Proper analysis and effort estimation are necessary for successfully planning a testing project. Any flaw in the critical estimation phase can result in missed deadlines, reduced ROI, and loss of the client's faith.
What to Estimate?

  • Resources: Resources are required to carry out any project task. They can be people, tools, environment, funding, or anything else capable of being defined as a requirement to complete a project activity.
  • Time: Time is the most valuable resource in a project. Every project has a deadline for delivery.
  • Skills / Experience: Skills can be defined by the knowledge and experience of the team members. Yes, this affects your estimation. For example, a team whose members have low testing skills will take more time to finish the project than one with highly skilled testers.
  • Cost: If all the above factors are estimated correctly, the cost follows automatically.

In this article, we will go through some points that help in preparing good test estimates. We will not discuss the standard methods for test estimation, such as testing metrics (those will be covered in a separate article), but will instead share some tips on the following:
Factors to consider while estimating testing effort:

1. How about a buffer time?

Having a buffer in the estimate helps you cope with any delay that may occur and ensures maximum test coverage.

2. Do all builds come bug-free?

We should consider the fact that the test cycle depends on the stability of the build. If the build is not stable, developers will need more time for fixes, and the testing cycle obviously gets extended accordingly. Always dedicate around 20% of the estimated time to bug retesting; for example, in a 100-hour execution estimate, reserve about 20 hours for retesting fixed bugs.

3. Availability of All Resources for the Estimated Period

The test estimate should account for all the leaves planned by the team members over the next few weeks or months. This will ensure that the estimates stay realistic. (Do consider some sick leaves too, as not everyone on your team is as fit as Kohli or Dhoni.)

4. Parallel Testing – Concurrent Rather than Sequential?

Many times we come across situations where we can test two or more thread-safe things at the same time. Try to keep this to a minimum, as multitasking can be hazardous if not executed properly; consider it only after analyzing the situation.

5. Estimation is not a One-Time Task – Re-visit the Estimates Regularly

We should frequently re-visit the test estimates and make modifications if needed. We should not extend an estimate once it is frozen, unless there are major changes in requirements.

6. Do You Know Your Team?

If you are aware of the strengths and weaknesses of the members of your team, you can estimate testing tasks more precisely. While estimating, one should consider that all members may not yield the same productivity level; some QAs execute faster than others. Though this is not a major factor, it does affect the estimate.

7. Historical Data of Previous Estimates for Improvement and Accuracy

Experience from past projects plays a vital role while preparing new time estimates. We can try to overcome the difficulties or issues we faced in past projects, and we can analyze how accurate the previous estimates were and how much they helped deliver the product on time. Again, keep one thing in mind – not every project is the same, so do not apply this tip everywhere.

Apart from the aforesaid 7 points, do of course consider the scope of your project to identify what to test and what not to test. Also, rough calculation, approximation, educated/informed guess, and rough guess are all synonyms of estimation – so don't strive for perfect accuracy; be adaptive.
Bye-Bye Note:

According to the functional decomposition framework, not everything should be incorporated in a single article.
So, I will be writing a separate article (maybe in a month or two) to discuss some standard estimation techniques which we can use to estimate our testing tasks. The following estimation techniques have been compiled from several sources:

1. Delphi Technique
2. Wideband Delphi (WDS)
3. Work Breakdown Structure
4. Three-point estimation
5. Function Point/Testing Point Analysis
6. Percentage of development effort method

Stay tuned for the next article, and do comment if you have any other tips for estimating tasks in a better and more efficient way.

Things to look for while testing an e-commerce site!

The better the site, the better the business.
An exemplary experience on an e-commerce site is of paramount importance, because otherwise the user might just leave with a bad impression of the site and never come back again. Hence, it becomes important to make sure that its major features are correctly implemented and functioning as per the requirements.
Since so much depends on the user experience, it becomes important that the e-commerce site undergoes thorough testing.
A core user journey flow would cover login/signup, searching for a product, adding the product to cart, filling up the payment and order details and making payment.

So here is a list of scenarios which can be used as a guide to get started with testing the e-commerce site.

User creation and login

Most e-commerce sites allow the user to purchase an item as a guest user, as a registered user, or via a social network login, so it's important to consider all of these conditions when testing.

1. As a guest user: validate that the user is able to purchase an item without having to create an account.
2. As a registered user: validate that the user is able to purchase an item with an existing account and with a newly created account.
3. Create an account or log in during the checkout process and validate that the selected item gets added to the shopping cart.
4. Log in via a social network and validate that the user is able to make the payment.

Searching, Sorting, Filtering and Pagination

The search feature allows the user to select the desired product from the thousands of products present on the site. Hence it’s important that the search shows relevant results.

1. Perform a search by providing the product name, brand, category or sub-category. Validate that the search result page displays products that satisfy the search criteria, with the most relevant products on top.
2. On the search result page and the category page, validate that the filters function as expected and display the desired results. Apply both single and multiple filters.
3. Similarly, validate that the products are sorted according to the sorting option chosen, and when paginating make sure that the applied sort order and filters remain.

Test your cart

Case 1: The Happy Flow.
Needless to say, the user should be able to add the desired product(s) to the shopping cart and proceed to payment as a registered user; validate the same for a guest user.
Further, after the payment is successfully made, validate that the status of the bought product(s) gets updated accordingly under 'my orders'/'my bookings'.

Case 2: Updating the products in the cart.
After adding the product(s) to the shopping cart, update the shopping cart before proceeding to payment. The following scenarios can be considered for updating the cart:

1. Increase the quantity of a product in the cart
2. Add a new product
3. Remove an added product from the cart
4. Add the same item multiple times

For each of the above-mentioned scenarios, validate that the total amount displayed is correct.

Case 3: Product Inventory

1. After the payment is made for a product, validate that the inventory of the product gets updated in the database.
2. Try buying a product in a quantity greater than the quantity available.
3. Verify that a product with inventory = 0 is displayed as out of stock to the user.

For both scenarios 2 and 3, an alert message should be displayed when the user tries to add the product to the cart.

Case 4: When two different users try to buy the same product simultaneously with only 1 item left.
Here, two users try to buy the same product simultaneously with just 1 item in stock. User 1 and User 2 both add the product to their carts. While User 1 continues adding more products to the cart, User 2 makes the payment for the product. In such a case, an alert message should be displayed to User 1 when he proceeds to make the payment.

Case 5: Failed transaction
When the transaction is canceled during checkout or on the payment gateway page – because of poor network connectivity, wrong card details, the user deliberately cancelling, or any other reason – validate that the status of the product(s) in the cart changes accordingly.

Case 6: When the user does not proceed to make payment after adding a product to cart.
For this case, validate that the added products are displayed in the user's cart every time he logs in.

Case 7: The 'Wishlist'
The wishlist feature provides an improved e-commerce experience and makes it easier for the customer to save product(s) and return to them later. The scenarios for the wishlist feature can be:

1. For a logged-in user: add an item to your wishlist and validate that it is still present in your wishlist the next time you log in.
2. Validate that a user is able to move product(s) from the wishlist to the shopping cart and proceed with the payment.

Apart from the above-listed scenarios, there can be many others. If you can think of any, please feel free to add them in the comments section.

Creating a Desktop App with Ember, ExpressJS, SQLite3 and Node-WebKit

Assumptions:

  • Frontend is Ember and backend is Express.
  • NW (node-webkit) version is 0.20.1 (install using npm install -g nw@0.20.1).
  • Node version is 7.0.0 (install using nvm install v7.0.0; native modules built for Node 7, such as node-sass, use binding files named node-v51-linux-x64).
  • nw-gyp (install using npm install -g nw-gyp) is used for building native modules for node-webkit from source. By default it uses the latest node version for compilation – in my case v7.0.0 – which is why I upgraded the project's node version to v7.0.0.
  • The frontend build compiles the Ember project and stores the compiled index file at backend/public/index.html.

Initial Setup

cd /home/ubuntu/project_name/backend
nvm install v7.0.0
nvm list
nvm use v7.0.0
npm install -g nw@0.20.1
npm install -g nw-gyp

Nodemon is used for running the backend Express server (nodemon --exec 'bin/www' in the backend directory):
npm install -g nodemon

Install the npm packages mentioned in package.json:
npm install

By default, sqlite3 is fetched precompiled for Node. Since we need it compiled for node-webkit, first uninstall sqlite3:
npm uninstall sqlite3 --save

Compile sqlite3 from source for the node-webkit runtime, targeting a 64-bit architecture and node-webkit 0.20.1:
npm install sqlite3 --build-from-source --runtime=node-webkit --target_arch=x64 --target=0.20.1 --save

Move the compiled sqlite3 binding from node-webkit-v0.20.1-linux-x64 to node-v51-linux-x64, the directory Node 7 loads bindings from:
mv /home/ubuntu/project_name/backend/node_modules/sqlite3/lib/binding/node-webkit-v0.20.1-linux-x64/node_sqlite3.node
/home/ubuntu/project_name/backend/node_modules/sqlite3/lib/binding/node-v51-linux-x64/node_sqlite3.node

Add the following fields to the backend package.json:

{
"main": "http://localhost:3000",
"node-main": "./bin/www"
}

To run node-webkit for the current project, execute nw from the project root.

Preceding-sibling and following-sibling in XPath

How to use preceding-sibling and following-sibling in XPath to find sibling nodes:

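The original post showed a screenshot of the HTML here. A list consistent with the examples that follow would be (reconstructed markup):

<ul>
  <li>Hardshell</li>
  <li>doprep</li>
  <li>soldier2ndlife</li>
  <li>Savior</li>
</ul>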

XPath: "//ul/li[contains(text(),'doprep')]/preceding-sibling::li"

This will give "Hardshell".

How to get all the following siblings of doprep:

XPath: "//ul/li[contains(text(),'doprep')]/following-sibling::li"

This will give all the following siblings (soldier2ndlife, Savior).

There is a trick to using preceding-sibling and following-sibling: the way you place them matters. When you use them at the beginning of the expression, they give you the reverse result.

For example, when you use preceding-sibling at the beginning, it gives you (soldier2ndlife, Savior) instead of Hardshell:

XPath: "//li[preceding-sibling::li='doprep']"

This will give soldier2ndlife and Savior.

When you use following-sibling at the beginning, it likewise gives the reverse result: instead of giving all the nodes below doprep, it will give Hardshell.

XPath: "//li[following-sibling::li='doprep']"

Now the question is how to get all the nodes between doprep and Savior:

XPath: "//ul/li[preceding-sibling::li='doprep' and following-sibling::li='Savior']"

This will return soldier2ndlife.

or

XPath: "//ul/li[preceding-sibling::li[.='doprep'] and following-sibling::li[.='Savior']]"

or

XPath: "//ul/li[preceding-sibling::li[contains(text(),'doprep')] and following-sibling::li[contains(text(),'Savior')]]"

We cannot always use absolute XPaths; there are cases where we need relative XPaths for dynamic elements. Such concepts help an automation engineer script quickly.

Although the Chrome and Firefox consoles let you extract XPaths on a given page, if you want to practice more XPath expressions you may use the test bed below:

http://www.whitebeam.org/library/guide/TechNotes/xpathtestbed.rhtm

MongoDB Replica Set and Related Memory Issues

For better understanding I have divided this post into two parts:
– In the first part I will explain replica sets and how they work.
– In the second part I will explain ways to manage storage space.

MongoDB is an open-source document database (NoSQL) that provides high performance, high availability, and automatic scaling. MongoDB is written in C++.
A record in MongoDB is a document, which is a data structure composed of field and value pairs. MongoDB documents are similar to JSON objects. The values of fields may include other documents, arrays, and arrays of documents.

Key features:

  • High performance data persistence.
  • Rich query language supporting read and write (CRUD) operations as well as text search and data aggregation.
  • High availability through a replication feature called replica sets.
  • Horizontal scalability through sharding, which distributes data across a cluster of machines.

Replication feature in MongoDB

Consider an app whose data is present on a single MongoDB server, and the server suddenly crashes, corrupting the entire data set in the outage. In this situation all of the application's data is lost and the business ends up nowhere. Therefore every organization needs to keep at least one copy of the data on another server, which is possible through MongoDB's replication feature. With replication it is possible to create additional copies of the data, which we can use for business continuity, disaster recovery, or backup.

Advantages of Data Replication:

  • A best practice to keep business-crucial data safe.
  • Redundancy increases data availability (24×7).
  • Keeps additional copies of the data, which can be used for business continuity, disaster recovery, or backup.
  • Replication can provide increased read capacity, as clients can send read operations to different servers.
  • No downtime is needed for maintenance such as data backups, data compaction, and index rebuilding.

Replication Process in MongoDB
Replication in MongoDB is achieved through a replica set. A replica set in MongoDB is a group of mongod processes that maintain the same data set. Hence a replica set contains several data-bearing nodes (mongod processes) and optionally one arbiter node. Of the data-bearing nodes, one and only one member is deemed the primary node (master), while the other nodes are deemed secondary nodes (slaves).
The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members.

Members of a replica set

  • Primary (Master): The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations in the primary's oplog (operation log).
    All members of the replica set can accept read operations; however, by default an application directs its read operations to the primary member.
  • Secondary (Slave): A secondary maintains a copy of the primary's data set. To replicate data, a secondary applies operations from the primary's oplog in an asynchronous process, so that the secondaries' data sets reflect the primary's data set. A replica set can have one or more secondaries.
  • Arbiter: You may add an extra mongod instance to a replica set as an arbiter. Arbiters do not maintain a data set. The purpose of an arbiter is to maintain a quorum in a replica set by responding to heartbeat and election requests from other replica set members.
    Because they do not store a data set, arbiters can be a good way to provide replica set quorum functionality at a cheaper resource cost than a fully functional replica set member with a data set.
    Only add an arbiter to sets with an even number of voting members. If you add an arbiter to a set with an odd number of voting members, the set may suffer from tied elections.
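
As a minimal sketch, a primary-secondary-arbiter layout like the one just described can be created from the mongo shell with rs.initiate(); the replica set name and hostnames below are illustrative:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.example.net:27017" },
    { _id: 1, host: "mongo2.example.net:27017" },
    { _id: 2, host: "mongo3.example.net:27017", arbiterOnly: true }  // votes, stores no data
  ]
})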


Automatic failover in Replication
There might be a situation where the primary becomes inaccessible. When a primary does not communicate with the other members of the set for more than 10 seconds, an eligible secondary will hold an election to elect itself the new primary. The first secondary to hold an election and receive a majority of the members' votes becomes primary.

Although the timing varies, the failover process generally completes within a minute. For instance, it may take 10-30 seconds for the members of a replica set to declare a primary inaccessible. One of the remaining secondaries holds an election to elect itself as a new primary. The election itself may take another 10-30 seconds.
While an election is in process, the replica set has no primary and cannot accept writes and all remaining members become read-only.
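
To see which member is currently primary (or whether an election is still in progress), rs.status() reports each member's state; a quick sketch from the mongo shell:

rs.status().members.forEach(function (m) {
  print(m.name + " : " + m.stateStr);   // PRIMARY, SECONDARY, ARBITER, ...
})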
Managing Disk Space in MongoDB

When documents or collections are deleted, empty record blocks arise within the data files. MongoDB attempts to reuse this space when possible, but it never returns the space to the file system. This behavior explains why fileSize never decreases despite deletes on a database.

Let's say I have 20GB of data in a MongoDB database and I delete 5GB of it. Even though that 5GB is deleted and only 15GB of data actually remains in the database, the unused 5GB will not be released to the OS. MongoDB will keep holding the entire 20GB of disk space it had earlier, so that it can use the same space to accommodate new data. The used disk space thus keeps increasing and is never released.

Remedy
There will be situations where we don't want to let MongoDB keep hogging all the disk space. Depending on the setup and the storage engine used, we have a couple of choices.

1. Compacting individual collections:
The compact command can be used to compact individual collections. It rewrites and defragments all data in a collection, as well as all of the indexes on that collection.

Important: This operation blocks all other database activity while running and should be used only when downtime for the database is acceptable. If running a replica set, we can perform compaction on the secondaries to avoid blocking the primary, and use failover to make the primary a secondary before compacting it.

The compact command works at the collection level, so each collection in the database has to be compacted one by one. It completely rewrites the data and indexes to remove fragmentation. In addition, if the storage engine is WiredTiger, the compact command will also release unused disk space back to the system. If the storage engine is the older MMAPv1, it will still rewrite the collection but will not release the unused disk space. Running the compact command places a block on all other operations at the database level, so we have to plan for some downtime.
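
A minimal sketch from the mongo shell (the database and collection names are hypothetical):

use shop
db.runCommand({ compact: "orders" })   // rewrites and defragments the "orders" collection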

2. Repair (Compacting one or more databases)
For a single-node MongoDB deployment, we can use the db.repairDatabase() command to compact all the collections in the database. This operation rewrites all the data and indexes for each collection in the database from scratch and thereby compacts and defragments the entire database.

To compact all the databases on a server, we can stop our mongod process and restart it with the --repair option.

Important

This operation blocks all other database activity while running and should be used only when downtime for the database is acceptable.
Running a repair requires free disk space equal to the size of the current data set plus 2 GB. We can use space on a different volume than the one mongod is running on by specifying the --repairpath option.
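
A sketch of the repair invocation (paths are illustrative); stop the mongod process first:

mongod --dbpath /var/lib/mongodb --repair --repairpath /mnt/spare/repair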

3. Compacting all databases on a server by re-syncing replica set nodes
For a multi-node MongoDB deployment (replica set), we can resync a secondary from scratch to reclaim space. By resyncing each node in the replica set we effectively rewrite the data files from scratch and thereby defragment the database.

Please note that if the cluster comprises only two electable nodes, we sacrifice high availability during the resync, because the secondary is completely wiped before syncing.

MongoDB Replica Set: Database size difference between Primary and Secondary Node
You could face a scenario where the secondary node's database is larger than the primary's. Both nodes could have the same number of objects, but the values of "avgObjSize", "dataSize", and "storageSize" are higher for the secondary node – and there may be no replication lag either, when checked with the rs.status() command.
What could be the reason for this?

dataSize: the total size of the data held in the collection.
avgObjSize: the average size of an object in the collection.
storageSize: the total amount of storage allocated to the collection for document storage.
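
These values can be compared on each node from the mongo shell (the collection name is hypothetical):

db.stats()          // dataSize, storageSize, avgObjSize for the whole database
db.orders.stats()   // the same fields for a single collection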

Reason: differing amounts of unreclaimed space on the secondary and the primary.

Suppose we have a replica set with one primary and one secondary node. Let's say this node has been the primary for a long time, during which documents were deleted and inserted but no compact operation was run. The free space on the primary would not be reclaimed by the OS and would be counted in dataSize, avgObjSize, and storageSize.

The secondary node could have been fully resynced from the primary, with only operations from the current oplog replayed on it. In that case the secondary could have lower values for dataSize, avgObjSize, and storageSize.

If that secondary is then elected primary, you would see the size difference described above: both nodes have the same number of objects, but the values of "avgObjSize", "dataSize", and "storageSize" are higher on the new secondary (the former primary).

Role of a Tester as an Agile Team Member

Historically, the responsibility of a tester was limited to proving that the requirements were met, ensuring that the software worked, and finding bugs in an almost completed product. The tester's role therefore came into the picture only after the development cycle was complete.
But Agile is all about being flexible enough to cater to changing requirements, and process- or tool-driven development is less responsive to change and less likely to meet customer requirements. Hence the Agile Manifesto flips titles around and says to focus on individuals and interactions over processes and tools.
And so, the role of a tester in Agile goes beyond "just testing" and logging bugs. Testers should work with everyone in the team to build in and improve the quality of the product as early as possible. They should be able to wear many hats and support other team members: if they see a task that needs to be done and have the skills to do it, they should do it regardless of their title.

Below are a few roles Navyug's testers often take up, and you can see them fulfilling the inherited responsibilities splendidly:

i. The tester as an Architect:

As testers, they participate in the design meetings that happen every now and then before the features are written down as stories. They contribute by asking "What if …" questions and creating a model of how the system will look. This also helps to identify dependent areas in case of future changes or bug fixes, and to find ambiguity in the features before testing even begins.

Responsibilities:
• Define "done" for every story by listing down the acceptance criteria for each of them.
• List down the scenarios that will be included as part of testing a particular feature.
• Create mockups for the upcoming features.
• Suggest improvements.
• Offer suggestions on development approaches that would ease testing.

ii. The tester as an Explorer:
Exploratory testing uncovers bugs that no automated test will ever find, and with tight timelines it becomes a valuable skill. Being an explorer means mapping the system on the run: testers discover interesting information about the system and reassess their moves. It is more about what the product actually does than about what we believe it should do as per the documents.
Being an explorer does not mean they approach the system in an unplanned or unstructured manner. It means they work systematically and document whatever they have tested, as they discover more about the system.

Responsibilities:
• List down all the scenarios covered during exploration.
• Discuss with the team all new scenarios discovered during exploratory testing.

iii. The tester as a Reporter:
A tester's responsibility is not limited to sharing simple pass/fail details; they also share valuable information like:
• Issues or bugs they have uncovered.
• Areas they didn’t have time to explore.
• Recurring defects that could be prevented.
• Blockers, wait states, and other problems obstructing further testing.

A good tester always keeps his/her team informed about the findings – issues or bugs found should not come as surprises to other team members during scrum meetings.

Responsibilities:
• Reporting defects.
• Work with the team to resolve defects by reporting the maximum possible scenarios around any uncovered bug.
• Create a bug report and share it with the team at the end of every sprint.
• Regularly discuss bugs that need urgent attention with the team by scheduling triage meetings.

iv. The tester as an Automator:
Knowing how to code is always a useful skill and can be of great help to the team. But one doesn't need to be a skilled programmer to start using automation: there are many automation frameworks driven by plain-English words and phrases, such as those based on Gherkin.
Automating the tests not only reduces redundancy of work (especially during smoke testing) but also saves a lot of time and improves the efficiency of the tests.
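
For instance, an illustrative Gherkin scenario (the feature and steps are made up for this example) reads almost like plain English:

Feature: User login
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user submits valid credentials
    Then the dashboard is displayed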

Responsibilities:
• Write an automation script.
• Execute the script.
• Update the script with changing requirements.
• Create a dashboard to display the result of every run of the automation script.

Benefits of CSS Sprite Images

In almost all web applications, we use many icons to make pages look elegant. This might be good for the UI/UX, but it has a huge implication on network performance.
Consider a page that shows 45 different services provided by an organisation: 45 network calls are required to fetch all the icons, while each browser opens only a limited number of parallel connections to optimize its performance (Chrome opens at most 6 connections at a time).
One way of getting all the images at once is domain sharding; in our case we would need 45/6 ≈ 8 (rounded up) different domains to serve all the icons at once.

Another, more popular way of achieving this is by using sprite images. A sprite is a single image which contains all the icons; CSS is then used to display each icon at its respective place in the web page.
The following link has 33 icons which are served through a single sprite image.

Max hospital

This is the sprite image used: a single PNG containing all the icons.

How to create and use Sprites:

There are a lot of CLI/GUI tools to generate sprite images. The image above was generated using a Python-based CLI tool called Glue.

Glue Website

Just pass in the directory of icons and it will generate a PNG image with all the icons and a CSS file with all the image names as classes. Use these classes in your HTML wherever you want to show the icons. The sprite image and CSS file can be customized in several ways as required; please refer to its documentation for all possible options.

Example usage of the tool:

glue-sprite icons/ --img=assets/images/ --css=. --sprite-namespace= --namespace= --force

Explanation of the command:

icons/ – directory with all the icons.

--img=assets/images/ – the sprite image will be created under this path (this path is used so that the generated CSS classes also get the assets/images path – the standard path for serving images through Rails).

--css=. – the CSS file will be created in the current directory.

--sprite-namespace= – string to append to the class names (left empty here).

--namespace= – string to prepend to the class names (left empty here).

--force – do not use the cache while creating the sprite.

A sample of the generated CSS file:

.wi_fi_zone,
.blood_bank,
.default_service,
.default_specialty,
.mobile,
.email{background-image:url('/assets/images/icons.png');background-repeat:no-repeat}
.wi_fi_zone{background-position:-243px -30px;width:28px;height:23px;}
.blood_bank{background-position:-243px -53px;width:18px;height:28px;}
.default_service{background-position:-243px -81px;width:21px;height:23px;}
.default_specialty{background-position:-243px -104px;width:21px;height:23px;}
.mobile{background-position:-243px -127px;width:22px;height:22px;}
.email{background-position:-243px -149px;width:16px;height:14px;}
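
To use the sprite, apply the generated classes to empty elements wherever an icon is needed (the markup below is illustrative):

<span class="wi_fi_zone"></span>
<span class="email"></span>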

Postgres Continuous Archiving & Point-in-Time Recovery

Why do we need a Backup/Recovery Strategy?
Backup and recovery strategies are absolutely essential for the uninterrupted operation of any live business unit. The strategy must plan for recovery from every catastrophe, such as:

  • Device Failure: Loss of Machine or Disk
  • Failure during Maintenance: Hardware or software upgrades
  • Site Failure: Failure at Datacenter or a network failure
  • Blunders by Operators: DevOps/system operator drops a table/schema/datafile. GitLab is a recent example – https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/
  • Data Corruption: the application introduces poor code and corrupts the data, or the disk gets corrupted.
  • Compliance: Data Retention Periods, Storage of Readable and Writeable data.
  • Profitability guides the strategy – Business Impact vs. Cost. The decision revolves around questions like:

    1. How long will recovery take?
    2. How much will data storage cost?
    3. How long do we need to store backup data?
    4. Will an outage affect the brand?
    5. Can the database or any part remain operational during backup and/or recovery?
    6. Will the root cause of failure be tracked with the strategy?

    There are three fundamentally different approaches to backing up Postgres data:

    1. SQL dump
    You can create a text file with SQL commands using the Postgres utility program pg_dump. pg_dumpall is used to dump an entire database cluster in plain-text SQL format along with users, groups, and associated permissions.
    Dumps created by pg_dump are internally consistent – the dump represents a snapshot of the database as of the time pg_dump begins running – and the utility does not block readers or writers.
    CONS: A dump captures only a single point in time; changes made after the dump starts are not included, so everything since the last dump can be lost.
    2. File system level backup (Cold Backup)
    An alternative backup strategy is to directly copy the files that Postgres uses to store the data in the database.
    CONS: The database server must be shut down in order to get a usable backup, and file system backups only work for complete backup and restoration of an entire database cluster.
    3. Continuous Archiving (Hot Backup)
    At all times, PostgreSQL maintains a Write Ahead Log (WAL) in the pg_xlog/ subdirectory of the cluster’s data directory. The log records every change made to the database’s data files. This log exists primarily for crash-safety purposes: if the system crashes, the database can be restored to consistency by “replaying” the log entries made since the last checkpoint. However, the existence of the log makes it possible to use a third strategy for backing up databases: we can combine a file-system-level backup with backup of the WAL files. If recovery is needed, we restore the file system backup and then replay from the backed-up WAL files to bring the system to a current state.

    PROS:

    • This allows the database to stay operational during backup and enables online file system backup of a database cluster
    • We do not need a perfectly consistent file system backup as the starting point. Any internal inconsistency in the backup will be corrected by log replay (this is not significantly different from what happens during crash recovery). So we do not need a file system snapshot capability, just tar or a similar archiving tool.
    • Since we can combine an indefinitely long sequence of WAL files for replay, continuous backup can be achieved simply by continuing to archive the WAL files. This is particularly valuable for large databases, where it might not be convenient to take a full backup frequently.
    • It is not necessary to replay the WAL entries all the way to the end. We could stop the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, this technique supports point-in-time recovery: it is possible to restore the database to its state at any time since your base backup was taken.
    • If we continuously feed the series of WAL files to another machine that has been loaded with the same base backup file, we have a warm standby system: at any point we can bring up the second machine and it will have a nearly-current copy of the database.

    CONS:

    • As with the plain file-system-backup technique, this method can only support restoration of an entire database cluster, not a subset.
    • Also, it requires a lot of archival storage: the base backup might be bulky, and a busy system will generate many megabytes of WAL traffic that have to be archived. Still, it is the preferred backup technique in many situations where high reliability is needed.

    Requirements for Postgres PITR

    • A full/base backup (This section is WIP)
    • The following must be set in postgresql.conf:

    1. wal_level set to archive or hot_standby (for details refer to the official Postgres docs)
    2. archive_mode set to on
    3. an archive_command that performs archiving only when a switch file exists and supports PITR, e.g.:
    archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || (test ! -f /var/lib/pgsql/archive/%f && cp %p /var/lib/pgsql/archive/%f)'
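
    As a sketch of the remaining pieces (connection settings, paths, and the target timestamp below are illustrative): the base backup can be taken with pg_basebackup, and recovery is driven by a recovery.conf placed in the restored data directory:

    pg_basebackup -h localhost -U replicator -D /var/lib/pgsql/base_backup -Ft -z -P

    # recovery.conf: replay archived WAL up to the desired point in time
    restore_command = 'cp /var/lib/pgsql/archive/%f %p'
    recovery_target_time = '2017-03-01 12:00:00'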

“Maybe” Monad for Ruby

Let's start with a problem every Ruby on Rails developer faces: handling nil values – a tedious task.

NoMethodError: undefined method `[]' for nil:NilClass

But there is a well-known design pattern to handle nil values in Ruby in a more robust way, just like pure functional programming languages do: the 'Monad' design pattern.

A monad is a design pattern used to describe expressions as a series of actions. A monad generally wraps a datatype with some extra information, and a well-known monad for handling nil values is the 'Maybe' monad.

The Maybe monad is a programming pattern that allows treating nil values the same way as non-nil values. This is done by wrapping the value, which may or may not be nil, in a wrapper class.

There is already a gem called 'possibly' that handles nil values as a special data type; 'possibly' is an implementation of Haskell's Maybe monad for Ruby.
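
You can install it with:

gem install possibly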

Working

Maybe("I'm a value")
=> #<Some @value="I'm a value">

Some – represents a non-nil value.
None – represents a nil value.

Maybe(nil)
=> #<None>

Maybe is the type constructor:

Maybe("I'm a value").is_some? => true
Maybe("I'm a value").is_none? => false
Maybe(nil).is_some? => false
Maybe(nil).is_none? => true
Maybe("I'm a value").get => "I'm a value"
Maybe("I'm a value").or_else { "No value" } => "I'm a value"
Maybe(nil).get => None::ValueExpectedException: `get` called to None. A value was expected.
Maybe(nil).or_else { "No value" } => "No value"
Maybe("I'm a value").or_raise => "I'm a value"
Maybe(nil).or_raise => None::ValueExpectedException: `or_raise` called to None. A value was expected.
Maybe(nil).or_raise(ArgumentError) => ArgumentError
Maybe("I'm a value").or_nil => "I'm a value"
Maybe([]).or_nil => nil

These are the methods used to extract values out of the Maybe object, e.g.:

Maybe("I'm a value").is_some?
=> true

Handling Enumerables through the Maybe Monad

Maybe("Print me!").each { |v| puts v } => prints "Print me!"
Maybe(nil).each { |v| puts v } => prints nothing
Maybe(4).map { |v| Math.sqrt(v) } => #<Some @value=2.0>
Maybe(nil).map { |v| Math.sqrt(v) } => #<None>
Maybe(2).inject(3) { |a, b| a + b } => 5
None().inject(3) { |a, b| a + b } => 3

Consider an example:

Maybe(nil).map { |v| Math.sqrt(v) }
=> #<None>

A more real-world use of the latter example would be:

Maybe(nil).map { |v| Math.sqrt(v) }.or_else { 'Sorry' }
=> "Sorry"

Assume that in your Rails app the @current_user variable is set when the user is logged in. @current_user has one account, which contains the user's name. Your task is to print the name, or "name is not defined".

In HAML,

- if @current_user && @current_user.account && @current_user.account.name.present?
  = @current_user.account.name
- else
  = "name is not defined"

We can simplify this code with the help of Maybe:

= Maybe(@current_user)
    .map { |user| user.account }
    .map { |account| account.name }
    .or_else { "name is not defined" }

The latter example follows the DRY principle: fewer lines of code, more readable.
One catch in working with Maybe is that you have to write 'map' calls many times; there exists a shorter version of the code above, without the map calls:

= Maybe(@current_user).account.name.or_else { "Not logged in" }

Tip: a call to a method that is not defined on Maybe (i.e. not is_some?, is_none?, get, or_else, nor any Enumerable method) is treated as a map call.

Some more use cases in Rails

Imagine you have the following params hash in the controller action

(byebug) params

{transaction: {id: 1, booking: {id: 1, default_address: 1, renter_shipping_info: {id: 1, delivery_timeslot_id: nil}}}}

and you have to update the renter's shipping info (delivery timeslot) if a delivery timeslot id exists.

def renter_shipping_info_update
  # wrap params in Maybe and traverse; a missing key yields None instead of raising
  renter_timeslots = Maybe(params)[:transaction][:booking][:renter_shipping_info][:delivery_timeslot_id].map do |timeslot_id|
    Timeslot.find(timeslot_id)
  end

  # or_nil unwraps the Maybe (nil when no timeslot id was supplied)
  @current_user.delivery_timeslot = renter_timeslots.or_nil
  @current_user.save!
end

Explanation:

params is wrapped with Maybe and traversed down to :delivery_timeslot_id.
If params[:transaction][:booking][:renter_shipping_info][:delivery_timeslot_id] exists, then Maybe(params)[:transaction][:booking][:renter_shipping_info][:delivery_timeslot_id] returns Some with the id as its value. Otherwise it returns None.

Strategies to Interact with Dynamic Web elements using Selenium

The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency. – Bill Gates

A lot of times when we automate a feature and run it for the first time, it gives us a green result; however, on the second run it gives red results. On further analysis, we find that the web elements we are dealing with are dynamic in nature, and it becomes quite challenging to interact with them. So let's discuss some techniques to handle such web elements while scripting automation using any tool, be it open-source like Selenium WebDriver or commercial like UFT/QTP.
What do we understand by Dynamic Web Element?

A dynamic element is a web element whose attributes – not only the ID, but any attribute such as class name, value, etc. – are not fixed; they change every time you reload the page. So you cannot handle such an element with a simple locator.

For example, Gmail inbox elements: their class names change on every login.

Dynamic elements are database-driven or session-driven. When you edit an element in a database, it changes a number of areas in the application under test. Dynamic elements are strictly content, with the formatting laid out in the design. Dynamic identifiers are generally used for text boxes and buttons.

If you are automating a dynamic website, the scripts break as soon as the content changes, causing your tests to fail. You then have to update your test cases each and every time, which is a tiresome task.

We always need to understand how these elements behave when the page is reloaded or a new session starts. Once we understand that, we can devise a strategy to interact with them.
Some of the strategies are listed below:

Let's look at a dynamic element example: a button whose ID and class name are dynamic,
where the ID changes from 'Hstpl-3465-text1' to 'Hstpl-4434-textE2'
and the class name changes from 'Hstpl-Class-text45' to 'Hstpl-Class-text73' in every new session, as in the reconstructed markup below.
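
Reconstructed markup (the original snippet is not shown; the button label is made up):

<button id="Hstpl-3465-text1" class="Hstpl-Class-text45">Submit</button>
<!-- next session -->
<button id="Hstpl-4434-textE2" class="Hstpl-Class-text73">Submit</button>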

1. Relative Xpath with Starting Text:

Like the partial link text locator in Selenium, we can match elements in XPath by their starting text. We can apply the 'starts-with' function to access the element as shown below:

//button[starts-with(@id, 'Hstpl-')]

2. Relative Xpath with Following or Preceding Node

The 'following' axis includes all the nodes that follow the context node, and 'preceding' is its counterpart. We can use them to locate an element relative to a stable neighbour in the web element list:

XPath:

//button[contains(@class, 'Hstpl-Class')]/following::input[contains(@id, 'Hstpl-')]
//input[contains(@id, 'Hstpl-')]/preceding::button[contains(@class, 'Hstpl-Class')]

3. Relative Xpath with Text Contains

Some dynamic elements contain static substrings. Based on those values, we can use the 'contains' function to search for such elements. For example, in the HTML above the button's class name contains the static string 'Hstpl-Class'. We can use XPath to search for a button element whose class name contains 'Hstpl':

XPath:

//button[contains(@class, 'Hstpl')]

4. Relative Xpath with Multiple Attribute

We can also combine more than one condition to search for an element using XPath, like a button whose ID contains 'Hstpl' and whose class contains 'text':

XPath:

//button[contains(@id, 'Hstpl-')][contains(@class, 'Hstpl-Class-text')]

5. Element with Index

Sometimes more than one element with similar attributes is present in the DOM, and it becomes difficult to locate a particular one when they are dynamic in nature. For example, there are 10 buttons on a page and you want to locate the 5th button. Then we search for elements with the 'button' tag and navigate to the 5th entry of the resulting list (index 4):

BrowserDriver.GetDriver().FindElements(By.TagName("button"))[4]

If the hierarchical level doesn't change, this is one way to trace a dynamic element.

6. Absolute Xpath Method

The absolute XPath method uses the complete path from the root element down to the particular element. An absolute XPath starts with html and a forward slash (/), as shown below. You can use FirePath (Firebug) to generate XPaths. Absolute XPaths are brittle: a slight modification in the DOM makes them incorrect or makes them point to a different element. Normally it's not considered best practice to use absolute XPaths; however, they can solve the dynamic element problem.

XPath:

/html/body/div[5]/div[4]/div/div/div[6]/div/div[3]/div/button

7. Using IWebElement Interface.

One more way to handle a dynamic element is to find all elements with a given tag name and then pick the required element by its text value or other attributes. For example, to find a button with specific text you can use the code below (C#; requires using System.Linq):

IList<IWebElement> webElements = BrowserDriver.GetDriver().FindElements(By.TagName("button"));

IWebElement element1 = webElements.First(element => element.Text == "Hstpl");

element1.Click();

** Note: The IWebElement interface is used to interact with both visible and invisible elements present on a page.