Microsoft Excel is spreadsheet software that has been a standard among reporting tools for many years. It's designed to help you summarize and analyze data or keep it in tabular format. Excel is also loved for its wide range of mathematical formulas for every situation, which come in handy when you need to solve a complex mathematical problem.
Excel is one of the most widely used pieces of reporting software in the business world. Many small and large companies use it for various data analysis purposes.
But before investing in licensed software, you can try other solutions that offer more flexibility and allow visualizing data right in your app.
Today we'll look into the features of two reporting tools – WebDataRocks and Excel – and compare the reporting experience each provides.
We hope this article will help you pick what is right for you.
Excel impresses with its data visualization functionality. But once you open its workspace, it's easy to get lost among the variety of tabs and bars. Figuring out the purpose of each one may be overwhelming for a newcomer. It often takes a long time to learn how to navigate swiftly and efficiently. In general, the learning curve for Excel can be described as steep: you invest a lot of time in learning how to do reporting rather than in extracting value from the data.
Performance is always a key consideration in any data analysis and reporting process.
Excel ensures fast data access and processing because of the way it manipulates memory. When you are working with spreadsheets, an entire file is loaded into RAM. On the one hand, this approach speeds up data access. On the other hand, the whole document has to be reloaded into RAM again once you make any small change to your current workbook.
Thus, working with spreadsheets that contain large amounts of data (~200k records) can be rather slow and inefficient.
If you want to change parts of the interface, you can personalize the Ribbon and the Quick Access Toolbar, change the language, and so on. In this sense, Excel is customizable enough.
Excel works with almost every other product of Office 365. You can add spreadsheets to presentations or text reports.
But to move data between Excel and any third-party app, you need to use external services since there are no off-the-shelf integration options available.
You can use Excel not only as a desktop app but also in the web browser. Note that the two versions differ somewhat in functionality. However, it's not possible to embed Excel into a web application.
Shared workbooks used to have many limitations, but now you can use Excel for collaborative work with the help of the `Co-authoring` feature. Provided that you have a cloud-based version of Excel (as part of an Office 365 subscription), your colleagues can access your spreadsheets remotely and see updates nearly in real time. Without this subscription, the alternative is to send a report via e-mail or portable storage media, which duplicates information and is neither a secure nor a convenient way to share data.
Many companies use Excel as a database. Despite certain similarities between spreadsheets and databases, Excel lacks the capabilities of a database management system. Spreadsheets can't meet the requirements for data integrity, consistency, security, and ease of updating and retrieving data. Moreover, as mentioned before, multiple users can't access and update such a database at the same time unless the `Co-authoring` feature is enabled.
All things considered, it's more convenient and reliable to keep your data in relational databases than to store it in spreadsheets.
Excel is primarily known for its pivot tables – the most indispensable tool when it comes to summarizing the data and getting insights from it.
But be aware that adding pivot tables may affect performance – they can double the workbook's size because of the large caches they require.
Excel is not expensive for companies but can be costly for individual users.
Many people use Excel because it's familiar and they are used to it.
But why not break out of your comfort zone and move toward web-based solutions? Especially if you have exhausted the possibilities of Excel's pivot tables.
It's worth noting that using web pivot table components instead of Excel pivot tables implies taking a programmatic approach. The advantage of this approach is that you can embed the component into any app, fully control access to it, and tailor pretty much every aspect of your reporting experience.
And this is where WebDataRocks comes into play.
WebDataRocks is a client-side pivot table component designed to provide data analysts with summarized data in the form of a tabular report. You can easily embed this tool in your web application and empower your end-users with reporting.
Let's see what it has to offer.
To make your learning curve less steep, we developed a highly intuitive interface that lets you concentrate solely on reporting without being distracted by redundant functionality. All vital features are at hand; most of them are accessible via the Toolbar.
You can customize WebDataRocks according to the style of a particular application, add or remove Toolbar tabs, and leave only the features you need. You can easily switch between different layouts, apply predefined themes, add icons to the cells, and change everything through the UI or in code to make the component look native in your app.
Additionally, you can bring to life your ideas about design and create a custom report theme.
The previously mentioned customization options are what make WebDataRocks a flexible reporting tool that ideally suits Agile practices. In particular, it adapts to such Agile principles as continual improvement and flexible response to changes.
As a developer, you can experience the freedom of integration with popular frameworks such as React, Angular, AngularJS, Vue, and more.
To facilitate the integration process, you can make use of the available tutorials:
Besides, you can follow an API-first approach with a lot of API calls that make interacting with the tool easier and allow building your scenarios around the component.
It works perfectly in all browsers and on mobile platforms and looks native in any app.
In Excel, you are limited to the default charts and a set of predefined styles for them. Of course, you can install add-ons of third-party charting libraries, but it takes time to figure out how to use them in Excel's workspace.
Here is how we solved this problem in WebDataRocks – you can integrate the pivot table component with any charting library you like. All you need to do is pivot your data and pass it to any kind of chart. To make things easier, we prepared a series of tutorials on integration with the most popular charting libraries. If your favorite library is not on the list, you can use the `webdatarocks.getData()` method to preprocess the data into the format the chart requires.
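As an illustration of such preprocessing, here is a minimal sketch that reshapes rows like the ones used later in this post into a `{labels, values}` structure that many charting libraries accept. The field names and the target shape are assumptions – adapt them to your charting library of choice.

```javascript
// Group rows by a category field and sum a value field,
// producing labels/values arrays for a chart series.
function toChartSeries(rows, categoryField, valueField) {
  const totals = new Map();
  for (const row of rows) {
    const key = row[categoryField];
    totals.set(key, (totals.get(key) || 0) + row[valueField]);
  }
  return {
    labels: [...totals.keys()],
    values: [...totals.values()]
  };
}

const rows = [
  { Country: "Spain", Revenue: 33297 },
  { Country: "France", Revenue: 232333 },
  { Country: "France", Revenue: 66233 }
];
// toChartSeries(rows, "Country", "Revenue")
// → { labels: ["Spain", "France"], values: [33297, 298566] }
```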
With little or no IT help, you can set up a powerful data visualization tool in no time. The Quick start guide will walk you through the basics of working with the component.
We know how much teamwork matters: it provides the company with a diversity of thoughts and perspectives. After installing your instance of the pivot table, you can give role-based online access to analytics inside the application. Moreover, you can customize it to enable a read-only mode of the pivot table.
We protect you from losing your working results.
Save the report in a JSON file and restore it anytime.
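As a minimal sketch of this save-and-restore flow: the component's `getReport()` and `setReport()` calls read and apply a plain report object, which you can serialize to JSON. Where you persist the string (a file, `localStorage`, a server) is up to you; the `pivot` parameter below stands for an assumed WebDataRocks instance.

```javascript
// Serialize the current report configuration to a JSON string.
function saveReport(pivot) {
  return JSON.stringify(pivot.getReport());
}

// Apply a previously saved report configuration.
function restoreReport(pivot, json) {
  pivot.setReport(JSON.parse(json));
}
```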
Exporting reports is meant to be as fast as possible – WebDataRocks comes with exporting functionality bundled in, which means there is no need to install custom plugins for exporting to PDF, HTML, and, of course, Excel. You can even send your report to print. If you need to send the report to a server, that's not a problem either: you can customize the exporting configuration with a few lines of code.
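As a sketch of such a configuration (the `pivot` instance name and the upload endpoint are assumptions, and the parameter shape follows the documented export options):

```javascript
// Hypothetical export configuration: produce a PDF and
// send it to a server instead of downloading it locally.
const exportParams = {
  filename: "revenue-report",
  destinationType: "server",          // "file" would download it instead
  url: "https://example.com/reports"  // hypothetical upload endpoint
};
// pivot.exportTo("pdf", exportParams);
```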
You can load up to 1 MB of data.
Free.
You can get WebDataRocks at no cost and save your money for other business investments.
To help you make the best of both Excel and WebDataRocks, we made a transition from Excel to WebDataRocks as smooth and simple as possible.
You can load your CSV data into the pivot table component and start pivoting your data in a new web-based interface.
Besides, if you miss spreadsheets and want to see the raw data, you can always switch to the flat mode and view non-aggregated data row by row.
Today we reviewed the capabilities of desktop reporting software (Excel) and a web-based pivot component (WebDataRocks). While the former is a great example of a standalone application, the latter can successfully be used as an integrated component of a larger Business Intelligence platform.
The choice is yours.
Searching for ways to improve your programming skills? Try GitConnected – here you can find a rich collection of the most up-to-date coding tutorials.
In the final part of the data visualization project, we'll discuss the charts that visualize the distribution of univariate and bivariate data.
A histogram is the most commonly used plot type for visualizing distribution. It shows the frequency of values in data by grouping it into equal-sized intervals or classes (so-called bins). In such a way, it gives you an idea about the approximate probability distribution of your quantitative data.
Structure
The histogram is composed of vertical or horizontal bars. The height of each bar corresponds to the frequency of values that fall into its bin. By changing the bin width, you also change the number of bins – this will affect the shape of the distribution.
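The binning logic a histogram relies on can be sketched in a few lines of JavaScript (a minimal version that assumes at least two distinct values, so the bin width is non-zero):

```javascript
// Split the value range into equal-width bins and count
// how many values fall into each one.
function histogram(values, binCount) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const width = (max - min) / binCount;
  const counts = new Array(binCount).fill(0);
  for (const v of values) {
    // clamp so the maximum value lands in the last bin
    const i = Math.min(Math.floor((v - min) / width), binCount - 1);
    counts[i]++;
  }
  return counts;
}

// histogram([1, 2, 2, 3, 8, 9], 2) → [4, 2]
```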
Purpose
To visually represent the distribution of univariate data. Additionally, with the histogram, you can learn about the center, spread, and skewness of the data, as well as spot extreme, missing, or atypical values (outliers). You can also check whether the data has multiple modes.
One should not confuse histograms with bar or column charts – though these graphs look alike, they play totally different roles in data visualization:
Example
The distribution of the country's population:
A box and whisker plot is one of the most popular charts when it comes to statistical analysis of data distribution.
Structure
A box contains three important numbers: the first quartile, median, and third quartile. The other two numbers are the minimum and maximum – these are represented by whiskers.
These five numbers divide the dataset into sections. Each section contains around 25% of the data.
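These five numbers can be computed directly. Here is a minimal sketch using the "median of halves" convention for the quartiles; note that other conventions interpolate and give slightly different results:

```javascript
// Median of an already-sorted numeric array.
function median(sorted) {
  const n = sorted.length;
  const mid = Math.floor(n / 2);
  return n % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Min, Q1, median, Q3, max — the numbers a box plot draws.
function fiveNumberSummary(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  const lower = s.slice(0, mid);
  const upper = s.slice(s.length % 2 ? mid + 1 : mid);
  return {
    min: s[0],
    q1: median(lower),
    median: median(s),
    q3: median(upper),
    max: s[s.length - 1]
  };
}

// fiveNumberSummary([1, 2, 3, 4, 5, 6, 7])
// → { min: 1, q1: 2, median: 4, q3: 6, max: 7 }
```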
Example
Today you've learned more about charts that can be used for visualizing data distribution. We encourage you to learn by doing and try creating such charts in your data analysis project.
Eager to learn about other chart types? You are welcome to read the previous blog posts of the data visualization project:
Collecting high-quality data is a fundamental prerequisite for starting any data analysis or machine learning project.
However, looking for a really thought-provoking dataset can be a burdensome, time-consuming process. To save your precious time for deriving insights from the data, the WebDataRocks team prepared a carefully selected list of free repositories with real-world data, ready to boost your project.
Let's start exploring them!
Contents:
One of the largest and most powerful search engines, which hosts thousands of datasets on finance, infrastructure, transportation, environment, economy, and public safety. What is more, all the datasets are categorized with machine learning algorithms, which makes this platform even more intriguing.
Try digging deeper to find the most challenging datasets for your work.
Developers may find it useful that Socrata OpenData exposes the Discovery API, which provides a powerful way to access all the public data on the platform. Another great feature for developers is that API calls return nested JSON objects, which are easy to understand and parse.
On top of that, there are a lot of examples of data visualization and short tutorials which allow you to explore data interactively with charts. Here you can also find wrappers for accessing features of Socrata OpenData from various server-side languages.
If you want to become a contributor, read the Publisher guide to know how to upload your data.
Kaggle is arguably the greatest data science platform and community, impressing with its diversity of datasets, competitions, and examples of data science projects. Apart from educational purposes, it gives you a chance to win financial rewards in competitions hosted by leading companies that yearn to understand their data better.
But competitions are more about the journey into the data science realm than about winning first place. You should definitely bring all the available opportunities into play to master the skills required for your career as a data scientist.
It should be noted that this resource contains mainly cleaned data, especially if it's part of a competition's kernel. Datasets can be searched by different tags.
To experience a competitive and challenging environment and test your strengths, you can try participating in the following open competitions:
Or build visualizations and ML models around these datasets:
Keen on data-driven articles and graphics created by writers of FiveThirtyEight blog? Have a peek into the data that is at the heart of visualizations. You can download the data from this online collection of data or the GitHub repository. Also, you can navigate right up to the journalistic article where it was used.
Most of the visualizations that you can find here are interactive. And we encourage you to create your own variant of the analysis and visualization.
A comprehensive platform that has hosted datasets for machine learning tasks for many years. This is a classic place to start your machine learning path, and it is supported by the National Science Foundation. Every dataset is well described – you can check its default task, attribute types, data types, and other features. Many of the datasets are quite small but still great for educational projects.
This American nonprofit organization is recognized for noteworthy investigative journalism. But it's also known for offering a versatile data repository that covers health, criminal justice, education, politics, business, transportation, and finance topics. Besides, it's frequently updated.
The collection contains both paid and free datasets. Paid datasets, in turn, are available under academic, commercial, student, and journalist licenses.
ProPublica also cares about the ways you access data, exposing five APIs that simplify retrieving it.
Have you been waiting for an opportunity to create your own project but didn't know where to start?
Then you can't miss a perfect chance to improve your research and analysis skills on Yelp – one more platform that provides ready-to-use data and encourages both newcomers and skilled data scientists to solve problems.
Not only can you participate in the challenges but also win cash prizes.
After downloading and playing with the data, you can submit your project as the next step by filling in the application form. It can be presented in any format – a paper, video presentation, website, blog, etc. – anything that confirms your use of the data.
Do not pass by this place – itâs not only for students. Feel free to participate in challenges and discover your hidden talents.
A data service created and maintained by Airbnb. It hosts a unique collection of Airbnb's data, categorized by regions and countries. You can browse data for your particular city and explore insightful reports with creative visualizations. But we recommend getting the data and exploring it deeper with your favorite tools.
Being an open community for developers, data.world is a real treasure for everyone who is passionate about data analysis. More than 450 datasets for all tastes and purposes are freely available in the collection. Most of them are close to the modern world and, hence, require cleaning. Since cleaning data is an important stage of any data science project, here you are given an opportunity to practice these skills.
Datasets cover finance, crime, economy, education, census, environment, energy, sports, NASA, and a lot more topics.
Besides, you can even contribute your own data.
Signing up is a piece of cake – just use your GitHub account to register and get access to all the datasets.
Working with data is easy as well – you can write SQL queries through the site interface, use SDKs for Python or R or simply download the data file.
A rich data catalog containing datasets on various topics: economy, climate, education, logistics, healthcare, and more. On each dataset's page, there are embedded visualizations built with Plotly, which give you a quick overview of data trends.
If you canât find data you are looking for, you can even make a free request for it.
You will be impressed by a variety of means to integrate the dataset into the tool you are using. There are code snippets that show how to use data with R, Pandas, Python, JavaScript, cURL, and data-cli. Also, you can simply download CSV or JSON datasets.
It positions itself as a not-to-be-missed platform with financial and economic data that helps power data-driven strategies. Here you can find free and paid datasets. For data retrieval, Quandl provides a free-to-use API that acts as a single interface. Also, you can access data from Python, R, and Ruby with the help of modules and packages. An Add-In for Excel is available as well.
Enthusiastic about space-related projects?
Then this repository is a real find for you. It contains Astrophysics, Heliophysics, Solar System Exploration data, and Image Resources.
Surprised to see Wikipedia on the list? Yes, you can use it not only for educational purposes. Wikipedia also offers ways of downloading and querying data. You can read more about them in this guide.
A huge repository which provides free access to global development data. You can search datasets by countries, regions and economic or demographic indicators.
With the help of online data visualization tools, you can explore data interactively using charts, tables, maps, build reports in no time, style them, share and embed. Datasets are available as CSV, XML and Excel files.
A repository of public datasets from US government agencies. The datasets related to climate, consumers, education, ecosystems, energy, finance, manufacturing, science are at your fingertips.
Datasets are available for public use but sometimes you have to agree to license agreements before downloading and using data.
Another great thing is that you can submit data stories to share with the world your ways of using data. There are also a lot of challenges you can participate in.
Pew Research Center is known for publishing survey reports and various kinds of analysis. Its researchers make the datasets that lie at the core of their reports available to the public. Many of the datasets are provided as .sav files, so you should know how to use SPSS or R. With them, you can discover religious, political, social, journalistic, and media trends.
Dataset Search is a powerful search engine that exposes a convenient interface through which you can access millions of datasets from around the world. This relatively new product launched by Google is already favored by scientists, data journalists, and students who need to find scientific, social, environmental, or government data. A huge plus is that the volume of data is growing fast.
After querying data, you'll see the list of repositories, including academic ones, from which you can download it.
If you want to publish your data, follow these quality and technical guidelines, which help you understand how to describe uploaded datasets.
In general, Google Dataset Search copes well with the goal of making data more accessible to everyone.
But what if you want to practice analyzing big data?
Visit the Cloud Public Datasets Program catalog to find large and amazing datasets. All of them are stored in BigQuery and can be accessed through the Cloud Public Datasets Program. Though you need to pay for the queries that you perform on the data, you can make use of the first 1 TB of free queries.
You can search datasets from the Amazon Web Services platform through the Registry of Open Data. Datasets are available in the public domain. Here you can also find a lot of fascinating use cases which may inspire you to start new scientific or enterprise projects. They cover details on how organizations use data, implement recommender systems, predict stock prices, etc.
Besides, you can make your personal contribution by sharing data on AWS.
To start working with data, simply download it or get access from the cloud with the help of EC2 or Hadoop.
A distributed system that contains more than 45 TB of data for research. Pay attention to the license terms – most datasets may only be used for non-commercial and educational purposes.
Here is the list of some popular datasets:
You are more than welcome to explore the above-mentioned collections of data. To get an even more complete list of amazing datasets, we recommend referring to this GitHub page.
We do hope you'll find your perfect dataset for conducting data-driven research and satisfying your intellectual curiosity about trends in certain areas of our diverse life.
Good luck with your data analysis and machine learning projects!
To extract the value from your data, you can try visualizing it with WebDataRocks Pivot Table and a charting library of your choice. Here are the tutorials which will help you get started:
To advance your programming and analytical skills, we recommend searching for courses and tutorials on GitConnected.
At WebDataRocks, we care about your reporting experience and make every effort to deliver value to you on a daily basis.
In light of the new release, we are delighted to introduce new aggregation functions and report theme enhancements.
Starting from version 1.1.0, you can calculate the following measures right in the pivot table component:
Let's figure out why these aggregations may come in handy for your reporting and data analysis.
Like the average, the median is extremely important in statistical analysis since it represents the middle of a dataset. Unlike the mean (average), it's less affected by outliers and skewed data.
The standard deviation is another important statistic that helps to measure the spread of a data distribution.
We implemented two versions of this aggregation – population and sample standard deviations.
One more aggregation you can benefit from is the running total, also known as the cumulative sum. It's simply a total that changes every time a new value is added to the list of numbers. A nice thing about the running total is that you can check the total at any moment without having to sum the whole sequence every time.
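For reference, here is what the three aggregations described above compute, sketched in plain JavaScript. These are illustrative reference implementations, not WebDataRocks source code:

```javascript
// Median: the middle value of a sorted dataset.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Population vs. sample standard deviation differ only in
// the divisor: N for a whole population, N - 1 for a sample.
function stdDev(values, sample = false) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sqSum = values.reduce((a, b) => a + (b - mean) ** 2, 0);
  return Math.sqrt(sqSum / (values.length - (sample ? 1 : 0)));
}

// Running total: each element is the sum of all values so far.
function runningTotal(values) {
  let total = 0;
  return values.map(v => (total += v));
}

// median([3, 1, 2]) → 2
// stdDev([2, 4, 4, 4, 5, 5, 7, 9]) → 2 (population)
// runningTotal([1, 2, 3]) → [1, 3, 6]
```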
The appearance of the reporting tool is no less important than its functionality.
Therefore, we made minor improvements to the lightblue and teal report themes and fixed issues with colors so as to make your pivot table stand out from the rest.
You can try all the themes in our demo.
Explore our demos to try the new aggregations.
Or install the new version of your favorite reporting tool via npm.
Thank you for entrusting your data analysis and reporting to WebDataRocks. We do our best to help you turn your raw data into rich and comprehensible reports.
There is much more to come.
Stay tuned to new updates!
Follow us on Twitter to keep track of the latest developments and tips & tricks on data visualization.
Subscribe to our CodePen profile to be the first to know about new mind-blowing demos.
Correlation is a powerful statistical concept that refers to a linear relationship between variables. It lies in the center of regression analysis techniques.
And when it comes to visualizing relationships between variables, you cannot avoid using charts. They are a great aid in assessing the quality of predictive regression models.
Charts that show correlation are the first step toward detecting cause-effect relationships (but one should remember that correlation doesn't always imply causation).
In this article, we'll cover the purpose and structure of two basic charts – the scatter plot and the bubble chart.
A classic chart for any statistician when it comes to correlation and distribution analysis. It's perfect for spotting distribution trends in data.
Structure
The variable on the y-axis is the dependent variable, while the variable on the x-axis is the independent one.
Purpose
Use it to check whether there is any relationship between two variables. The presence of a certain kind of relationship simply means that changes in the independent variable lead to changes in values of the dependent variable.
With this chart, you can also notice anomalies or clusters in data.
Recommendations
Example
Check the relationship between the number of hours studied and the final grades:
If the data points are scattered in a random pattern, there is no correlation. If they form a curve, there may be a non-linear rather than a linear relationship between the variables.
A bubble chart is simply a variation of a scatter chart.
Purpose
Use it to identify the relationship between data points.
The bubble chart is essential for visualizing three- or four-dimensional data on a plane.
Structure
The x-axis corresponds to an independent variable, the y-axis to a dependent one. The third and fourth variables can be represented by the size of a data point and its color. The size should be proportional to the value it represents, and the color should correspond to a certain category.
Recommendations
Examples
Today we've discussed the charts which are widely used in predictive analytics.
We aim to share with you the most important information related to data visualization.
To deepen your knowledge about charts, check out other parts of the data visualization project:
If you want to visualize aggregated data in charts, you can integrate WebDataRocks Pivot Table with Google Charts or Highcharts:
Recently we discussed which charts are best suited for comparison over time.
Today we'll shed light upon the main ideas behind data composition.
The idea of this kind of visualization lies in helping the viewer understand how individual parts comprise a whole. With data composition, it's easy to focus attention on the importance of each part with regard to the total value.
Though most charts show the relative value, you can use some of them to compare absolute values.
Let's move on to the charts that are best for a part-to-whole analysis.
All these charts can be divided into two categories:
A stacked area chart works the same way as an area chart but allows comparing multiple data series by placing values of groups on top of each other. It works best if you want to show relative and absolute differences between categories.
Purpose
Use it for tracking changes of a total value across different categories over a period of time.
Recommendations
Example
Measure the company's performance over the years:
Compare with a version with overlapped plots:
The second one is neater and easier to read, isn't it?
You may have used a bar chart for comparing changes in values over time or across categories, but for displaying the size differences between parts of a whole, it's better to take advantage of stacked bar charts. In this chart, the bars are placed on top of each other.
Purpose
Use it to show how a larger category is divided into subcategories and how each subcategory contributes to the total amount. In case you need to show the percentage of each value, use a 100% stacked bar graph instead.
Recommendations
Examples
Over time comparison of sales of books divided by genres:
A pie chart is one of the most widely used types of visualization. The first mention of the pie chart is attributed to William Playfair in 1801. Though many people criticize pie charts, they play an important role when it comes to understanding the parts of the whole.
Purpose
Use it to show proportions or percentages of categories in the form of a circle divided into segments. The value of each category is proportional to the corresponding length of the arc.
Recommendations
To make the chart comprehensible, follow these basic rules:
Example
Visualizing the categories which bring the most traffic:
A doughnut chart is a variation of the pie chart with a round hole in the center. It has the same purpose as a pie chart does but its special feature is a capability to contain supplementary information which can improve the readability of the chart.
Example
Shares of sales by countries:
A treemap chart looks unpretentious, yet it's powerful. Thanks to their compactness, treemaps help to illustrate a large amount of data in a hierarchical view.
Purpose
Use it to visualize hierarchical data in the form of clustered rectangles which together represent the whole. Each node (or group) is represented by a rectangle whose area is proportional to its value. Colors help to add dimensions such as groups and subgroups.
Recommendations
Examples
This treemap shows the world population. You can drill through to see the population at each level of the hierarchy and its subhierarchies.
A funnel chart is a type of chart that depicts the decrease at each stage of a process. In other words, it illustrates how many items are lost at each stage of a particular process. This is why it's widely used in marketing and sales. The most popular use case is the analysis of conversions.
If there's an increase at each stage, a funnel chart turns into a pyramid chart.
Purpose
Use it to identify bottlenecks in a workflow by ordering the entire process by stages.
Example
Conversion of website users into customers:
A waterfall chart shows how a starting value is increased or decreased by a sequence of intermediate values that lead to the total value. Such a process is called cumulative. This type of chart is commonly used in the finance domain.
Purpose
Use it to understand how the value is gradually changed over time or across categories.
Example
We do hope you enjoyed this overview of charts and recommendations. Now you are ready to pick the right type of visualization and keep your data analysis ahead of the curve.
Thank you for reading!
To help your audience get insights from your report, the best practice is to emphasize the most important chunks of information in it. Highlighting data makes your report more efficient and easier to perceive.
Today we'll show how to highlight trends in your data by substituting individual values with custom icons.
Our reporting goal is to identify successful and unsuccessful months in terms of revenue.
To see the results right away, you can scroll down to the CodePen demo at the end of the tutorial.
Let's head over to practice!
Following the steps from the Quick start guide, embed a pivot table component on the web page.
Now it's time to fill the pivot table with the data.
Connect to your data source (CSV or JSON) by specifying a URL to the data file or defining a function which returns the data.
Here is how you can define the data provider:
```javascript
function getData() {
  return [{
    "Country": "Spain",
    "Revenue": 33297,
    "Date": "2018-02-21T08:05:23.683Z"
  }, {
    "Country": "France",
    "Revenue": 232333,
    "Date": "2018-02-21T08:05:23.683Z"
  }, {
    "Country": "France",
    "Revenue": 66233,
    "Date": "2018-03-21T08:05:23.683Z"
  }, {
    "Country": "Spain",
    "Revenue": 27356,
    "Date": "2018-03-02T18:57:56.640Z"
  }];
}
```
Now tell the pivot table where to get the data by specifying the function in the dataSource property of the report:
dataSource: { data: getData() }
Define which hierarchies you need to see on the grid. Put them into the rows and the columns. Choose fields for the measures and apply aggregation to them.
In our sample, we'll put "Date.Month" in the rows, "Country" in the columns, and "Revenue" in the measures.
"slice": {
  "rows": [
    { "uniqueName": "Date.Month" }
  ],
  "columns": [
    { "uniqueName": "Measures" },
    { "uniqueName": "Country" }
  ],
  "measures": [
    { "uniqueName": "Revenue", "aggregation": "sum" }
  ]
}
Also, you can apply filters now or later via the UI.
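Putting the pieces together, the data provider and the slice above can be combined into a single report object. This is a minimal sketch (with the data shortened to two records); the commented-out line shows how it would be passed to the pivot table embedded in the first step.

```javascript
// Sample data provider from the steps above (shortened to two records)
function getData() {
  return [
    { "Country": "Spain",  "Revenue": 33297,  "Date": "2018-02-21T08:05:23.683Z" },
    { "Country": "France", "Revenue": 232333, "Date": "2018-02-21T08:05:23.683Z" }
  ];
}

// The complete report: where the data comes from and how it is sliced
var report = {
  dataSource: { data: getData() },
  slice: {
    rows: [{ uniqueName: "Date.Month" }],
    columns: [{ uniqueName: "Measures" }, { uniqueName: "Country" }],
    measures: [{ uniqueName: "Revenue", aggregation: "sum" }]
  }
};

// In the browser, pass it to the component:
// var pivot = new WebDataRocks({ container: "#wdr-component", toolbar: true, report: report });
```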
Now that your report is ready, it's time to emphasize the numbers that speak most.
Our main helper will be customizeCell. With this powerful API call, you can easily change the style and content of any cell on the grid.
Here is how you can define customizeCellFunction:
function customizeCellFunction(cell, data) {
  if (data.type == "value" && !data.isDrillThrough && data.isGrandTotalColumn) {
    if (data.value < 200000) {
      cell.text = "<img src='https://static.webdatarocks.com/uploads/2019/02/21213347/icons8-decline-64-1.png' class='centered'>";
    } else if (data.value >= 200000) {
      cell.text = "<img src='https://static.webdatarocks.com/uploads/2019/02/21213348/icons8-account-64.png' class='centered'>";
    }
  }
}
What does customizeCellFunction do? This piece of code iterates over each cell and replaces the content of the grand totals with an appropriate icon depending on the value. In a similar way, you can customize any CSS styles of the cells.
To make the icon fit the cell, apply the following CSS styles for the image:
img.centered {
  margin: auto !important;
  padding-bottom: 10px;
  color: transparent !important;
  width: 22px;
  height: 22px;
  display: flex;
  align-items: center;
  font-size: 12px;
  position: relative;
  bottom: 4px;
  left: 6px;
}
Here is how your pivot table looks now:
Now you can export your report to HTML and send it to your teammates!
Play with the code on CodePen.
If you want to customize your pivot table more, we suggest reading the following articles:
The icons used in this tutorial are designed by Freepik from www.flaticon.com.
JSON stands for JavaScript Object Notation. It's a syntax that allows storing information in a self-explanatory and readable manner. Any JSON data can be converted into JavaScript objects. And vice versa: you can convert a JavaScript object into JSON before sending it to a server. This is what makes it so powerful and irreplaceable for fast server-to-browser communication.
It's favored by developers for its comprehensible structure. A JSON object contains key-value pairs which are called properties. The properties are separated by commas. You can access any property by referring to its name. Each object is surrounded by curly braces.
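The two-way conversion described above is built into JavaScript itself. A quick illustration, using a record in the shape of the data from this article:

```javascript
// JSON text -> JavaScript object: every property becomes accessible by name
var text = '{ "Country": "Spain", "Revenue": 33297 }';
var record = JSON.parse(text);
console.log(record.Country); // "Spain"

// JavaScript object -> JSON text, e.g. before sending it to a server
var serialized = JSON.stringify({ Country: "France", Revenue: 232333 });
console.log(serialized); // '{"Country":"France","Revenue":232333}'
```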
To work with WebDataRocks, you need to pass an array of JSON objects to the reporting tool.
But before loading your JSON data into the pivot table, you should make sure it's valid.
To prevent any issues with your data, we recommend using JSON validators – special online web services that allow you to load your JSON and check whether it contains any errors. Therefore, we've carefully prepared for you an overview of the best JSON validators.
Instead of scrolling through hundreds or thousands of lines of code to find an error, use them. Not only do they save a great amount of your time, but they also format your dataset accurately.
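If you'd rather check the data in code than in an online service, JSON.parse itself acts as a strict validator: it throws a SyntaxError on any malformed input. A minimal helper:

```javascript
// Returns true when the text is well-formed JSON, false otherwise
function isValidJSON(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (e) {
    return false;
  }
}

isValidJSON('[{ "Country": "Spain" }]'); // true
isValidJSON("[{ Country: 'Spain' }]");   // false – keys must be double-quoted
```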
JSONLint is one of the most popular free validators and formatters for JSON data. Itâs incredibly simple to use.
To validate your data, follow these steps:
Now your data is ready to be loaded into a pivot grid.
This free and powerful tool helps you validate, beautify and minify your JSON data. Along with validating functionality, it comes with extra features: you can set indentation levels, print the formatted data, convert it to CSV, YAML or XML formats, and download the data into a file. Also, it's a great option if you have nested JSON objects, as it enables you to see the tree view of your data. We tested this validator for browser compatibility – everything works as expected in Chrome, Safari, Edge and Firefox.
How to validate your data:
Preparing your data is the most important step on the way to successful data analysis. We hope these tools will make working with your data a trouble-free process.
When you use static reports as a baseline for your data analysis, you should always keep in mind that they show only a static view of your data. To look at it in real time, the majority of data analysts opt for dashboards. The reason is that dashboards provide more opportunities for exploring the data interactively: you can always go deeper into the details or step back to get the big picture.
As a developer, one day you may receive a task to configure a dashboard and embed it into a business application. Even after you've found suitable tools, a lot of heavy lifting may be involved in the integration process: compatibility with the software stack, customization challenges, overall performance, and, of course, budget limitations.
Also, since dashboards are powered by data that often comes from multiple sources, end-users should be able to connect those sources to the dashboard, switch between them, and present the data in a consumable form. The whole process needs to be as time-saving as possible.
That's where free JavaScript libraries come to the rescue.
WebDataRocks is a reporting tool that allows aggregating, filtering and sorting the data.
Google Charts is a data visualization library that provides a wide range of charts and graphs.
With their help, you can build a low-cost yet effective solution for any analytical project with any front-end technology including Angular, AngularJS, jQuery and React. Both tools are lightweight and extremely customizable.
Today we will focus on creating a fully personalized dashboard for monitoring sales metrics. If you are eager to get hands-on experience, scroll down to the section with useful links and run the demo, which features a dynamic report and responsive charts. We truly hope this tutorial will help you get a better understanding of how to combine the analytical features of a pivot table with the expressive power of charts.
Ready?
Let's start!
What is data analysis without a purpose? Setting your goals from the very beginning is a milestone on the road to successful analysis. It's better to have a clear idea of what your end-users need to achieve in the long run.
For an illustrative example, let's analyze the data of a fictional supply chain to understand its sales performance on a monthly basis as well as its customer demographics.
A pivot table is the engine of a dashboard: it takes all your raw data and transforms it into a summarized form.
Firstly, establish all the dependencies of WebDataRocks and include its styles into the <head> of your web page:
<link href="https://cdn.webdatarocks.com/latest/webdatarocks.min.css" rel="stylesheet"/>
<script src="https://cdn.webdatarocks.com/latest/webdatarocks.toolbar.min.js"></script>
<script src="https://cdn.webdatarocks.com/latest/webdatarocks.js"></script>
Secondly, create a container which will hold a pivot table:
<div id="wdr-component"></div>
Thirdly, add the component to your web page:
var pivot = new WebDataRocks({
  container: "#wdr-component",
  toolbar: true
});
Pull the CSV or JSON data from your database, CRM or any other data source.
You have several equivalent options for loading it into the pivot table:
"dataSource": { "dataSourceType": "json", "filename": "URL-to-your-data" }
"dataSource": { "data": getJSONData() }
Firstly, set up a slice – the most important part of the report, where you specify which hierarchies to place into the rows and columns, apply various filtering, and pick aggregation functions for your measures.
To extend the measuring capabilities, calculated values are at your disposal:
Please refer to the source code at the end of the article to learn how to set a slice.
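As a sketch of what such a slice can look like, here is one with a calculated value added to the measures. The field names ("Price", "Quantity") are placeholders – substitute the fields from your own dataset:

```javascript
var slice = {
  rows: [{ uniqueName: "Country" }],
  columns: [{ uniqueName: "Measures" }],
  measures: [
    // a regular aggregated measure
    { uniqueName: "Price", aggregation: "sum" },
    // a calculated value defined through a formula string
    { uniqueName: "Average price", formula: 'sum("Price") / sum("Quantity")' }
  ]
};
```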
Now that you have aggregated the data and displayed it on the grid, connect to the Google Charts library by including its Visualization API loader into the <head> of your web page:
<script src="https://www.gstatic.com/charts/loader.js"></script>
After that, load the package for the particular types of charts (either in the <head> or the <body> section):
google.charts.load('current', {
   'packages': ['corechart']
});
Also, you need to add the WebDataRocks connector for Google Charts that handles all the data pre-processing for the specific types of charts:
<script src="https://cdn.webdatarocks.com/latest/webdatarocks.googlecharts.js"></script>
Lastly, add containers for the charts:
<div id="combo-chart-container"></div> <div id="pie-chart-container"></div> <div id="bar-chart-container"></div>
To make the charts show the data from the table, you need to attach an event handler to the pivot table for the reportcomplete event:
reportcomplete: function() {
  pivot.off("reportcomplete");
  pivotTableReportComplete = true;
  createComboChart();
  createPieChart();
  createBarChart();
}
To track the pivot's and Google Charts' readiness, add two flag variables:
var pivotTableReportComplete = false;
var googleChartsLoaded = false;
Also, set the onGoogleChartsLoaded() function as a callback that runs once the Google Charts packages are loaded:
google.charts.load('current', { 'packages': ['corechart'] });
google.charts.setOnLoadCallback(onGoogleChartsLoaded);
onGoogleChartsLoaded() is responsible for getting the data from the pivot and starting the functions that instantiate our charts:
function onGoogleChartsLoaded() {
  googleChartsLoaded = true;
  if (pivotTableReportComplete) {
    createComboChart();
    createPieChart();
    createBarChart();
  }
}

function createComboChart() {
  if (googleChartsLoaded) {
    pivot.googlecharts.getData({ type: "column" }, drawComboChart, drawComboChart);
  }
}

function drawComboChart(_data) {
  var data = google.visualization.arrayToDataTable(_data.data);
  var options = {
    title: _data.options.title,
    height: 300,
    legend: { position: 'top' }
  };
  var chart = new google.visualization.ComboChart(document.getElementById('combo-chart-container'));
  chart.draw(data, options);
}
Similarly, you need to define the functions for a pie and a bar chart.
In the definition of each function, specify the chart type as the input argument for pivot.googlecharts.getData(). As the second and the third arguments, pass the functions that act as the callback and update handlers, meaning they are called once the data is loaded into the pivot table or updated. This way we make the charts react to the tiniest changes in our report.
Also, don't forget to set the chart options to make them look awesome.
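Following the same pattern as createComboChart, the pie and bar chart functions might look like this. This is a sketch that assumes the same pivot instance, googleChartsLoaded flag, and chart containers as above:

```javascript
// Pie chart: request pie-shaped data from the pivot; drawPieChart serves
// both as the ready callback and the update handler
function createPieChart() {
  if (googleChartsLoaded) {
    pivot.googlecharts.getData({ type: "pie" }, drawPieChart, drawPieChart);
  }
}

function drawPieChart(_data) {
  var data = google.visualization.arrayToDataTable(_data.data);
  var options = { title: _data.options.title, height: 300 };
  var chart = new google.visualization.PieChart(
    document.getElementById("pie-chart-container"));
  chart.draw(data, options);
}

// Bar chart: same structure, different chart type and container
function createBarChart() {
  if (googleChartsLoaded) {
    pivot.googlecharts.getData({ type: "bar" }, drawBarChart, drawBarChart);
  }
}

function drawBarChart(_data) {
  var data = google.visualization.arrayToDataTable(_data.data);
  var options = { title: _data.options.title, height: 300 };
  var chart = new google.visualization.BarChart(
    document.getElementById("bar-chart-container"));
  chart.draw(data, options);
}
```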
Let's make our dashboard even more expressive and customize the grid cells based on their values.
With the customizeCell function, we'll redefine the style and content of the cells with totals.
Now we can clearly observe the trends in our sales data, can't we?
Summarizing all the steps, today you've learned in practice how to build a sales dashboard with minimum code, maximum flexibility, and effectiveness. Now you can share it with your team and help them reveal deeper insights from the data.
See a complete code in the live demo.
You can always tune the report and get the new view of your data in real-time – simply drag and drop the hierarchies to switch them, expand or collapse their members and drill through the records to see the raw data and reveal more insights. Feel free to add the report filters to focus on the months in which you are interested the most.
It is well known that digital marketing is an irreplaceable way to connect and communicate online with potential customers of your business through different channels, both free and paid. Its set of tactics comprises diverse tools – search engine marketing, email marketing, posting content to social media, remarketing – which help you stay engaged with prospects, and much more. Combined, they are called integrated marketing methods and help achieve the most sophisticated goals.
According to Gartner's Digital Marketing Spend report, marketers invest 25% of their marketing expense budget in digital marketing methods, as they remain the most effective tactics to convert traffic into leads, subscribers, and sales.
To make the investments profitable, the first and foremost rule for a marketer is to plan the campaign thoroughly and analyze results afterward. Only thus can the efforts be rewarding.
Data is the heart of any analysis. And marketing analysis is no exception.
There are plenty of ways of storing marketing data and pulling it out – from databases to analytical platforms. Not all of them are suitable for reporting and data analysis, though. By blending the data from various sources, you can collect a versatile dataset to work with.
For you, as a marketer, keeping the analytics in one place is vital. Once you have finished your campaign and collected enough data, it's time to put your raw data to work. That's exactly where a pivot table may stand you in good stead.
Recently we've sorted out the structure and purpose of the pivot table as a powerful tool for web reporting. Now it's time to find out how it can come in handy for shaping your marketing strategy.
Today we'll analyze and measure paid marketing campaigns: track their results and evaluate their performance so as to maximize the return on the investment of time and money.
For this, we should take a look at trends in the engagement metrics (such as conversion rates, sessions, and bounce rates) and the overall number of leads, opportunities, and sales over a certain time period (let's say, six months).
When you think of metrics, you naturally imagine a performance dashboard. That's why we'll try a mixed technique that combines the reporting features of a pivot table with the visualization capabilities of charts.
Methods of calculating these metrics depend greatly on the specifics of the industry you're working in, so your formulas may differ from ours. The purpose of this article is to show how you can take advantage of pivot table capabilities for effective analysis of various metrics.
Before the start, we should know what results we want to achieve in the long run.
Our far-reaching goal is to optimize the leads-to-opportunities and opportunities-to-sales conversions.
That's why we'll pay particular attention to the amounts of leads, opportunities, and sales generated by channels and campaigns. Also, we'll calculate ROI, CPL, CPO, and CPC, as these metrics are the foundation of measuring how successful a campaign was. ROI is of special importance – it shows how profitable the marketing investment was.
We'll show you how to analyze real, live data and provide you with the link to the demo at the end of the article.
Let's master the process of creating reports together.
Let's connect to a CSV/JSON data file that has been previously exported from the platform you're using for marketing.
The fields from our dataset are represented as the hierarchies in the rows, columns, and measures. Particularly, we have the following fields:
Let's take a quick glance at the structure of the data in a flat view:
To get a complete picture of our analysis, let's add our own metrics by using the calculated values. Simply define the formulas for these measures in the Field List.
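The same metrics can also be defined as calculated values in the report code instead of through the Field List. The field names below ("Revenue", "Cost", "Leads", "Opportunities") and the exact formulas are assumptions – adjust them to your own dataset and your industry's conventions:

```javascript
var measures = [
  // return on investment: profit relative to the money spent
  { uniqueName: "ROI", formula: '(sum("Revenue") - sum("Cost")) / sum("Cost")' },
  // cost per lead
  { uniqueName: "CPL", formula: 'sum("Cost") / sum("Leads")' },
  // cost per opportunity
  { uniqueName: "CPO", formula: 'sum("Cost") / sum("Opportunities")' }
];
```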
Organize the data on the grid and aggregate it via the Field List or by defining a slice in code. Here is the result:
Note that we’ve added the “Month” level of the “Date” hierarchy to the report filters so as to be able to evaluate metrics over specific months.
To tailor the report, highlight the cells with conditional formatting based on their values:
It's time to perform a campaign analysis. Let's focus on three types of campaigns – Social Media, SEM, and Display Advertising – and define which channel is the main driver of traffic to your website.
Social Media Channel
We took the four most popular social media platforms to find out whether social media campaigns have any impact on generating leads.
SEM Channel
Also, we added the results of search engine advertising campaigns to measure their effectiveness:
Display Advertising Campaigns
To define how many impressions your display advertising brings and whether your strategy needs improvement, we'll analyze the flow of traffic from these campaigns:
We'll analyze all this data using a top-down approach: starting from the big picture, we'll drill down to the deeper levels of detail. In our case, the most logical way is to analyze the metrics of channels and then assess particular campaigns.
To gain even more insights, you can enhance the report with visual elements such as charts or diagrams that make information easier to grasp.
Let's add these kinds of graphs to our dashboard:
And others. All of them are designed to communicate the results of the analysis as effectively as possible. There are no limitations on your choice as we hope you’ll discover your unique way of visualizing the data.
Now it's time to estimate what we've achieved by following these guidelines.
For example, the next stacked column chart presents how many leads, opportunities, and sales were generated monthly by all the channels:
Moreover, we can observe which channels are the drivers of revenue:
From the chart above it's clear that the SEM channel brought the least profit and we should analyze the Display Ads and Social channels more closely:
This chart answers the question: "Which campaigns were the most profitable?"
Afterward, let's dig deeper into the analysis of social media campaigns:
Using these charts, we’ve checked whether the bounce rate was high and how many leads, opportunities and sales were delivered by social media campaigns.
Likewise, you can create interactive dashboards for the rest of the campaigns by using the dashboard templates we've carefully prepared for you.
Now that you understand the results of your marketing efforts, feel free to share the report with your colleagues or boss.
Today you’ve learned how to use a pivot table as a leading tool for doing research on the effectiveness of campaigns. We do hope you are inspired to use it for improving your strategic marketing goals.
Experience is the best teacher. We’ve prepared the interactive demos for you – load your own data and try exploring it to reveal more details about your marketing campaigns.
Keep searching for ways to rock your data with us. Stay tuned so you don't miss new blog posts!