
How does Big Data analytics differ from conventional analytics?

+2 votes
600 views
posted Jul 19, 2013 by Ujjwal Sinha

This may not be relevant here, but I came across this article and thought to share it:
http://bits.blogs.nytimes.com/2013/06/01/why-big-data-is-not-truth/

1 Answer

0 votes

Big Data analytics goes far beyond analyzing very large datasets. It centers on use cases that require the integration and analysis of different data types and sources. These range from marketing use cases such as advanced web analytics, online product usage (cohort) analysis and social media sentiment analytics, to security and risk use cases such as fraud detection, identifying rogue trader activity and asset risk analytics, to scientific or research use cases in medical, pharma and related industries.
With traditional business intelligence tools, users can analyze only structured data, which limits the amount and kinds of analysis they can perform. With Big Data analytics, it is now possible to quickly bring together and work with all types of data from any number of sources, whether structured transaction data or semi-structured and unstructured data such as weblogs, social media posts or emails. Analyzing all of this data helps users understand both customer transactions and customer interactions, and lets them answer questions and find insights that are simply impossible to obtain from traditional BI and structured data alone. For example, companies want to understand the entire customer engagement cycle, from a web ad all the way through to a purchase, either online or in a store, so that they can see which web ads actually result in the highest purchase percentage rather than just the highest click rates. This requires integrating and correlating weblogs, clickstream analytics and transaction data to get the complete picture.
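
To make the ad-to-purchase correlation concrete, here is a minimal plain-Java sketch (the session ids, ad ids and purchase records are hypothetical in-memory stand-ins for real weblog and transaction data) that joins clicks to purchases by session id and computes a purchase percentage per ad:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class AdConversionSketch {
        public static void main(String[] args) {
            // Hypothetical clickstream records: session id -> ad id clicked
            Map<String, String> clicksBySession = Map.of(
                    "s1", "adA", "s2", "adA", "s3", "adB", "s4", "adB", "s5", "adB");
            // Hypothetical transaction records: sessions that ended in a purchase
            Set<String> purchasingSessions = Set.of("s1", "s3", "s4");

            // Join on session id: count clicks and purchases per ad
            Map<String, int[]> statsByAd = new HashMap<>(); // ad id -> {clicks, purchases}
            for (Map.Entry<String, String> click : clicksBySession.entrySet()) {
                int[] stats = statsByAd.computeIfAbsent(click.getValue(), k -> new int[2]);
                stats[0]++;                                                  // a click...
                if (purchasingSessions.contains(click.getKey())) stats[1]++; // ...that converted
            }

            // Report purchase percentage per ad, not just raw click counts
            statsByAd.forEach((ad, s) -> System.out.printf(
                    "%s: %d clicks, %d purchases, %.0f%% conversion%n",
                    ad, s[0], s[1], 100.0 * s[1] / s[0]));
        }
    }

At Big Data scale the same join would of course run as a distributed job (for example MapReduce or Hive) over the raw logs rather than in memory; the sketch only shows the shape of the correlation.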

answer Jul 19, 2013 by Mithalesh Gupta
Similar Questions
+2 votes

Suppose we change the default block size to 32 MB and the replication factor to 1, and the Hadoop cluster consists of 4 DNs. The input data size is 192 MB. Now I want to place the data on the DNs as follows: DN1 and DN2 each hold 2 blocks (32 + 32 = 64 MB), and DN3 and DN4 each hold 1 block (32 MB). Is this possible, and how can it be accomplished?
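
(A minimal sketch of the block size and replication part, assuming the standard HDFS Java API; the path is hypothetical. Both can be set per file when writing it, which yields 6 blocks of 32 MB for a 192 MB input. Pinning specific blocks to specific DataNodes, however, is decided by the NameNode's placement policy, so exact DN1-DN4 placement would require a custom BlockPlacementPolicy.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SmallBlockWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Per-file settings: replication 1, 32 MB blocks
            // (a 192 MB file then occupies 6 blocks)
            FSDataOutputStream out = fs.create(
                    new Path("/user/demo/input192mb"), // hypothetical path
                    true,                              // overwrite if present
                    4096,                              // io buffer size
                    (short) 1,                         // replication factor
                    32L * 1024 * 1024);                // block size = 32 MB
            // ... write the 192 MB of data here ...
            out.close();
            fs.close();
        }
    }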

+2 votes

I declared a variable and incremented/modified it inside the Mapper class, and now I need to use the modified value in the Driver class. I declared a static variable in the Mapper class, and its modified value is visible in the Driver class when I run the code in the Eclipse IDE. But after exporting the code as a runnable jar from Eclipse and running it with "$ hadoop jar filename.jar input output", the modified value does not reflect in the Driver class (the value is 0).
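
(For context on why this happens: in local Eclipse mode everything runs in one JVM, so the static field is shared, but on a real cluster each map task runs in its own JVM, so the Driver never sees the mutation. A minimal sketch of the standard alternative, Hadoop Counters; the counter name here is hypothetical.)

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CounterSketch {
        // Hypothetical counter standing in for the static variable
        public enum MyCounters { RECORDS_SEEN }

        public static class MyMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Incremented in every task JVM; the framework aggregates the totals
                ctx.getCounter(MyCounters.RECORDS_SEEN).increment(1);
            }
        }

        // In the Driver, after job.waitForCompletion(true):
        static long recordsSeen(Job job) throws IOException {
            return job.getCounters().findCounter(MyCounters.RECORDS_SEEN).getValue();
        }
    }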

...