The World of Data Management According to Paul Sonderegger

Get the lowdown on the new European data privacy regulations—which by the way reverberate far beyond the European Union—on this edition of the OracleNext podcast. Data expert Paul Sonderegger riffs on how the data handling framework, rolling out this year, could impact consumers—whose information is being collected—and the companies wanting to parse that information.

Sonderegger, Oracle’s chief data strategist, who is interviewed by veteran journalists Michael Hickins and Barbara Darrow, also talks about:

  • How the move to cloud computing is complicating how data is collected, stored, and managed.
  • How the F. Scott Fitzgerald test might be a better gauge of true artificial intelligence (AI) than the more commonly applied Turing test. 
  • And how the aforementioned GDPR, now rolling out in Europe, could shift the balance of power between the people whose behavior is being observed and the companies that are observing that behavior.

It’s a great chat, so sit back and enjoy. And please tune back in to hear more from Hickins, Darrow, and their guests on the OracleNext Iconoclasts series.

For more from Sonderegger, check out his Quartz story on the F. Scott Fitzgerald test as well as his blog post about common misconceptions about data disruption.

https://blogs.oracle.com/the-world-of-data-management-according-to-paul-sonderreger

Introducing the OracleNext Podcast

OracleNext is a new podcast featuring wide-ranging conversations about how companies are putting emerging technologies to work. From cloud to artificial intelligence and from blockchain to customer experience, the critical choices we make today determine where our business goes next.

If you've enjoyed any of the articles that we've published on Forbes, The Wall Street Journal, Profit, Oracle Magazine, or Oracle Blogs, you can listen to OracleNext to get the stories behind these stories: you'll hear directly from veteran journalists as well as Oracle's senior leaders and product managers, and you'll get an inside view into how Oracle is thinking about and developing the next generation of technologies.

Listen to our podcast intro:

And listen to our first episode featuring Rob Preston, who was previously editor-in-chief of InformationWeek, Network Computing, and InternetWeek:

Subscribe to OracleNext on iTunes/Apple Podcasts:

  https://blogs.oracle.com/oraclenext-podcast-v2

Solving the Right HR Problems

In this episode of OracleNext you'll hear from Rob Preston, who expands on his recent Forbes article "How To Solve The Right Problems With Your New HR System." The story is based on a series of interviews that Rob conducted with Wade Larson, the chief human resources officer at Wagstaff—a company that designs, manufactures, sells, and services industrial direct chill casting equipment for the global aluminum industry.

In this episode Rob explores:

  • The critical importance of reviewing HR processes that may be based on out-of-date requirements.
  • How to approach the rollout of a cloud-based HR solution.
  • How HR data has allowed Wagstaff to gain efficiencies and focus more resources on innovation.

 

https://blogs.oracle.com/solving-the-right-hr-problems

Vidfuse Review : Bonus + Demo

Vidfuse – what is it? Vidfuse is a mobile and web video editing app. While video editing tools aren’t new, Vidfuse brings together an incredible web and mobile experience, allowing a video creator to shoot video on mobile, then edit and publish right on the spot. No desktop software is required.

YouTubers are making millions. Everyone from craft bloggers and slime makers to fishermen and cooking shows is absolutely crushing it online. If you’ve got the content, you’ve got the advertisers’ attention. Now you can set this up for yourself and tap into the biggest flow of advertising cash of the last five years.

The newly launching app, Vidfuse, is designed to help you achieve exactly that. Shoot, edit, and share; build hordes of loyal followers; and let the advertising dollars flow straight to you. You even get the secrets of the best YouTubers in the world as part of this launch special offer.

Vidfuse is the easiest way to create, edit, and publish video content. You can use the web or mobile apps to capture the content you want, edit it together, and publish with a single click or tap.

The distribution functionality of Vidfuse gives you powerful publishing capabilities, right there in your Vidfuse app. Publish with a single click to multiple accounts on all the most popular social and video networks.

Vidfuse Review

This content was first seen on Vidfuse Review : Bonus + Demo

Oracle Exadata: Ten Years of Innovation

By Bob Thome, Vice President of Product Management for Oracle Database Engineered Systems and Cloud Services 

I recently read some interesting blog posts on the driving forces behind many of today’s IT innovations. One of the common themes was the realization that sometimes purpose-built engineering is better at solving the toughest problems. Given that 2018 marks the 10-year anniversary of the introduction of Oracle’s first engineered system, Oracle Exadata, I started thinking about many of the drivers that led to the development of this system in the first place. Perhaps not surprisingly, I realized Oracle introduced Exadata for the same reason that drives other innovations—you can't reliably push the limits of technology using generalized "off-the-shelf" components.

Back in the mid-2000s, the conventional wisdom was that the best way to run mission-critical databases was to use a best-of-breed approach, stitching together the best servers, operating systems, infrastructure software, and databases to build a hand-crafted solution that met the most demanding application requirements. Every mission-critical deployment was a challenge in those days, as we struggled to overcome hardware, firmware, and software incompatibilities among the various components in the stack. Beyond stability, we found it difficult to meet the needs of a new class of extreme workloads that exceeded the performance envelopes of the various components. We found we were not realizing the true potential of the components, as we were limited by the traditional boundaries of dedicated compute servers, “dumb” storage, and general-purpose networking.

We revisited the problem we were trying to solve:

  • Performance: Optimize the performance of each component in the stack and eliminate bottlenecks when processing our specific workload.
  • Availability: Provide end-to-end availability, from the application through the networking and storage layers.
  • Security: Protect end-user data from a variety of threats both internal and external to the system.
  • Manageability: Reduce the management burden to operate these systems.
  • Scalability: Grow the system as the customer's data processing demands balloon.
  • Economics: Leverage the economics of commodity components while exceeding the experience offered by specialized mission-critical components.

Reviewing these objectives in light of the limits of the best-of-breed technology led to a simple solution—extend the engineering beyond the individual components and across the stack. In other words, engineer a purpose-built solution to provide extreme database services. In 2008, the result of this effort, Oracle Exadata, was launched.

The mid-2000s saw explosive growth in compute power, as Intel continually launched new CPUs with greater and greater numbers of cores. But databases are I/O-hungry beasts, and I/O was stuck in the slow lane. Organizations were deploying more and more applications on larger and larger SANs, connecting the servers to the storage with shared-bandwidth pipes that were fast becoming a bottleneck for any I/O-intensive application. The economics and complexity of SANs made it difficult to give databases the bandwidth they required, and the result was lots of compute power starved for data. The burning question of the day was, “How can we more effectively get data from the storage array to the compute server?”

The answer, in hindsight, was quite simple, although quite difficult to engineer. If you can’t bring the data to the compute, bring the compute to the data. The difficulty was that you couldn’t do this with a commercial storage array. You needed a purpose-built storage server that could work cooperatively with the database to process vast amounts of data, offloading processing to the storage servers and minimizing the demands on the storage network. From that insight, Exadata was born.

Over the years, we’ve built upon this engineered platform, refining the architecture of the system to improve performance, availability, security, manageability, and scalability, all while using the latest technology and components and minimizing overall system cost. 

Innovations Exadata has brought to market:

  • Performance: Pushing work from the compute nodes to the storage nodes spreads the workload across the entire system while eliminating I/O bottlenecks; intelligent use of flash in the storage system provides flash-based performance with hard disk economics and capacities. The Exadata X7-2 server can scan 350GB/sec, 9x faster than a system using an all-flash storage array.
  • Availability: Proven HA configurations based on Real Application Clusters running on redundant hardware components ensure maximum availability; intelligent software identifies faults throughout the system and reacts to minimize or mask application impact. Customers are routinely running Exadata solutions in 24/7 mission-critical environments with 99.999% availability requirements.
  • Security: Full stack patching and locked down best-practice security profiles minimize attack vulnerabilities.  Build PCI DSS compliant systems or easily meet DoD security guidelines via Oracle-provided STIG hardening tools.
  • Manageability: Integrated systems management and tools specifically designed for Exadata simplify the management of the database system. New fleet automation can update multiple systems in parallel, enabling customers to update hundreds of racks in a weekend.
  • Scalability: Modular building blocks connected by a high-speed, low-latency InfiniBand fabric enable a small entry-level configuration to scale to support the largest workloads. Exadata is the New York Stock Exchange’s primary transactional database platform, supporting roughly one billion transactions per day.
  • Economics: Building from industry-standard components to leverage technology innovations provides industry-leading price performance. Exadata’s unique architecture delivers better-than-all-flash performance at the capacity and cost of low-cost HDDs.

Customers have aggressively adopted Exadata to host their most demanding and mission-critical database workloads. Chances are you indirectly touch an Exadata every day—by visiting an ATM, buying groceries, reserving an airline ticket, paying a bill, or just browsing the internet. Four of the top five banks, telcos, and retailers run Exadata. Fidelity Investments moved to Exadata and improved reporting performance by 42x. Deutsche Bank shaved 20% off their database costs while doubling performance. Starbucks leveraged Exadata’s sophisticated Hybrid Columnar Compression technology to analyze point-of-sale data while saving over 70% on storage requirements. Lastly, after adopting Exadata, Korea Electric Power processes load information from their power substations 100 times faster, allowing them to analyze load information in real time to ensure the lights stay on.

The funny thing about technology is that you must keep innovating. Given today’s shift to the cloud, all the great stuff we’ve done for Exadata could soon be irrelevant—or will it? The characteristics and technology of Exadata have been successful for a reason—that’s what it takes to run enterprise-class applications! The cloud doesn’t change that. Just as people in an on-premises world don’t run their mission-critical business databases on virtual machines (because they can’t), customers migrating to the cloud will not magically be able to run those same mission-critical business databases in VMs hosted in the cloud. They need a platform that meets their performance, availability, security, manageability, and scalability requirements, at a reasonable cost.

Our customers have told us they want to migrate to the cloud, but they don’t want to forgo the benefits they realize running Exadata on-premises. For these customers, we now offer Exadata in the cloud. Customers get a dedicated Exadata system, with all the characteristics they’ve come to appreciate, hosted in the cloud with all the benefits of a cloud deployment: pay-as-you-go pricing, simplified management, self-service, on-demand elasticity, and a predictable operational expense budget, with no customer-owned data center required.

However, not everyone is ready to move to the cloud. While the economics and elasticity are extremely attractive to many customers, we’ve repeatedly found customers unwilling to put their valuable data outside their firewalls. Whether because of regulatory issues, privacy issues, data center availability, or just plain conservative tendencies toward IT, they are not able or willing to move to the cloud. For these customers, we offer Exadata Cloud at Customer, which puts the Exadata Cloud Service in your data center, delivering cloud economics with on-premises control.

So, it’s been a wild 10 years, and we are continuing to look for ways to innovate with Exadata. Whether you need an on-premises database or a cloud solution, or are looking to bridge the two worlds with Cloud at Customer, Exadata remains the premier choice for running databases. Look for continued innovation as we adopt new fundamental technologies, such as lower-cost flash storage and non-volatile memory, that promise to revolutionize the database landscape. Exadata will continue as our flagship database platform, leveraging these new technologies and making their benefits available to you, regardless of where you want to run your databases.

I hope this post gives you a sense of the history behind Exadata and some of the dramatic shifts that will be affecting your databases in the future. This is the first in a series of blog posts that will examine these technologies. Next, we will look more closely at performance: why it is critical in a database server and how we’ve engineered Exadata to provide the best performance for all types of database workloads. Stay tuned.

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. 

 

https://blogs.oracle.com/oracle-exadata%3A-ten-years-of-innovation-v3

Writing Content That Is Too In-Depth Is Like Throwing Money Out the Window

You’ve heard people telling you that you need to write in-depth content because that’s what Google wants.

And it’s true… the average page that ranks on page 1 of Google contains 1,890 words.

[Chart: average word count of pages ranking on page 1 of Google]

But you already know that.

The question is, should you be writing 2,000-word articles? 5,000? Or maybe even go crazy and create ultimate guides that are 30,000 words?

What’s funny is, I have done it all.

I’ve even tested out adding custom images and illustrations to these in-depth articles to see if that helps.

And of course, I tested if having one super long page with tens of thousands of words or having multiple pages with 4,000 or 5,000 words is better.

So, what do you think? How in-depth should your content be?

Well, let’s first look at my first marketing blog, Quick Sprout.

Short articles don’t rank well

With Quick Sprout, it started off just like any normal blog.

I would write 500 to 1,000-word blog posts and Google loved me.

Just look at my traffic during January 2011.

[Screenshot: Quick Sprout traffic, January 2011]

As you can see, I had a whopping 67,038 unique visitors. That’s not too bad.

Even with the content being short, it did fairly well on Google over the years.

But over time, more marketing blogs started to pop up, competition increased, and I had no choice but to write more detailed content.

I started writing posts that were anywhere from 1,000 to a few thousand words. When I started to do that, I was able to rapidly grow my traffic from 67,038 to 115,759 in one year.

[Screenshot: Quick Sprout traffic, January 2012]

That’s a 72.67% increase in traffic in just 1 year.

It was one of my best years, and all I had to do was write longer content.

So naturally, I kept up with the trend and continually focused on longer content.

But as the competition kept increasing, my traffic started to stagnate, even though I was producing in-depth content.

Here are my traffic stats for November 2012 on Quick Sprout.

[Screenshot: Quick Sprout traffic, November 2012]

I understand that Thanksgiving takes place in November, hence traffic wasn’t as high as it could be. But still, there really wasn’t any growth from January to November of 2012.

In other words, writing in-depth content that was a few thousand words max wasn’t working out.

So what next?

Well, my traffic had plateaued. I had to figure something else out.

Writing longer, more in-depth content had helped me before… so I thought, why not try the 10x formula?

I decided to create content 10 times longer, better, and more in-depth than everyone else. I was going to the extreme because I knew it would reduce the chance of others copying me.

Plus, I was hoping that you would love it as a reader.

So, on January 24, 2013, I released my first in-depth guide.

It was called The Advanced Guide to SEO.

[Image: The Advanced Guide to SEO]

It was so in-depth that it could have been a book.

Literally!

Heck, some say it was even better than a book as I paid someone for custom illustration work.

Now let’s look at the traffic stats for January 2013 when I published the guide.

[Screenshot: Quick Sprout traffic, January 2013]

As you can see my traffic really started to climb again.

I went from 112,681 visitors in November to 244,923 visitors in January. Within 2 months I grew my traffic by 117%.

That’s crazy!!!!

The only difference: I was creating content that was so in-depth that no one else dared to copy me (at that time).

Sure, some tried and a few were able to create some great content, but it wasn’t like hundreds of competing in-depth guides were coming out each year. Not even close!

Now, when I published the guide, I broke it down into multiple chapters like a book because when I tested making it one long page, it loaded so slowly that the user experience was terrible.

Nonetheless, the strategy was effective.

So what did I do next?

I created 12 in-depth guides

I partnered up with other marketers and created over 280,000 words of marketing content. I picked every major subject… from online marketing to landing pages to growth hacking.

I did whatever I could to generate the most traffic within the digital marketing space.

It took a lot of time and money to create all 12 of these guides, but it was worth it.

By January of 2014, my traffic had reached all-time highs.

[Screenshot: Quick Sprout traffic, January 2014]

I was generating 378,434 visitors a month. That’s a lot for a personal blog on marketing.

Heck, that’s a lot for any blog.

In other words, writing 10x content that was super in-depth worked really well. Even when I stopped producing guides, my traffic continually rose.

Here’s my traffic in January 2015:

[Screenshot: Quick Sprout traffic, January 2015]

And here’s January 2016 for Quick Sprout:

[Screenshot: Quick Sprout traffic, January 2016]

But over time something happened. My traffic didn’t keep growing. And it didn’t stay flat either… it started to drop.

[Screenshot: Quick Sprout traffic, 2017]

In 2017, my traffic dropped for the first time.

It went from 518,068 monthly visitors to 451,485. It wasn’t a huge drop, but it was a drop.

And in 2018 my traffic dropped even more:

[Screenshot: Quick Sprout traffic, 2018]

I saw a huge drop in 2018. Traffic went down to just 297,251 monthly visitors.

And sure, part of that is because I shifted my focus to NeilPatel.com, which has become the main place I blog now.

But it’s largely that I learned something new when building up NeilPatel.com.

Longer isn’t always better

Similar to Quick Sprout, I have in-depth guides on NeilPatel.com.

I have guides on online marketing, SEO, Google ads, Facebook ads, and the list goes on and on.

If you happened to click on any of the guides above you’ll notice that they are drastically different than the ones on Quick Sprout.

Here are the main differences:

  • No fancy design – I found with the Quick Sprout experience that people love fancy designs, but over time content gets old and outdated. Updating content that contains so many custom illustrations is tough, which means you probably won’t update it as often as you should. This causes traffic to drop over time because people want to read up-to-date, relevant information.
  • Shorter and to the point – I’ve found that you don’t need super in-depth content. The guides on NeilPatel.com rank in similar positions on Google and cap out at around 10,000 words. They are still in-depth, but I found that after 10,000 or so words there are diminishing returns.

Now let’s look at the stats.

Here’s the traffic to the advanced SEO guide on Quick Sprout over the last 30 days:

[Screenshot: traffic to the Quick Sprout Advanced Guide to SEO, last 30 days]

Over 7,842 unique pageviews. There are tons of chapters and as you can see people are going through all of them.

And now let’s look at the NeilPatel.com SEO guide:

[Screenshot: traffic to the NeilPatel.com SEO guide, last 30 days]

I spent a lot less time, energy, and money creating the guide on NeilPatel.com, yet it receives 17,442 unique pageviews per month, which is more than the Quick Sprout guide. That’s a 122% difference!

But how is that possible?

I know what you are thinking. Google wants people to create higher quality content that benefits people.

So how is it that the NeilPatel.com one ranks higher?

Is it because of backlinks?

Well, the guide on Quick Sprout has 850 referring domains:

[Screenshot: referring domains for the Quick Sprout guide]

And the NeilPatel.com guide has 831 referring domains:

[Screenshot: referring domains for the NeilPatel.com guide]

Plus, they have similar URL ratings and domain ratings according to Ahrefs so that can’t be it.

So, what gives?

Google is a machine. It doesn’t think with emotions; it uses logic. While we as users look at the guide on Quick Sprout and think that it looks better and is more in-depth, Google focuses on the facts.

See, Google doesn’t determine if one article is better than another by asking people for their opinion. Instead, it looks at the data.

For example, it can look at the following metrics:

  • Time on site – which content piece has a better time on site?
  • Bounce rate – which content piece has the lowest bounce rate?
  • Back button – does the article solve all of the visitors’ questions and concerns? So much so that the visitor doesn’t have to hit the back button and go back to Google to find another web page?

And those are just a few things that Google looks at from its 200+ ranking factors.

Because of this, I took a different approach to NeilPatel.com, which is why my traffic has continually gone up over time.

Instead of using opinion and spending tons of energy creating content that I think is amazing, I decided to let Google guide me.

With NeilPatel.com, my articles range from 2,000 to 3,000 words. I’ve tried articles with 5,000+ words, but there is no guarantee that the more in-depth content will generate more traffic or that users will love it.

Now to clarify, I’m not trying to be lazy.

Instead, I’m trying to create amazing content while being short and to the point. I want to be efficient with both my time and your time while still delivering immense value.

Here’s the process I use to ensure I am not writing tons of content that people don’t want to read.

Be data driven

Because there is no guarantee that an article or blog post will do well, I focus on writing amazing content that is 2,000 to 3,000 words long.

I stick within that range because it is short enough that you will read it and long enough that I can go deep enough to provide value.

Once I release a handful of articles, I then look to see which ones you prefer based on social shares and search traffic.

Now that I have a list of articles that are doing somewhat well, I log into Google Search Console and find those URLs.

You can find a list of URLs within Google Search Console by clicking on “Search Traffic” and then “Search Analytics”.

You’ll see a screen load that looks something like this:

[Screenshot: Google Search Console Search Analytics report]

From there you’ll want to click on the “pages” button. You should be looking at a screen that looks similar to this:

[Screenshot: Google Search Console pages view]

Find the pages that are gaining traction based on total search traffic and social shares and then click on them (you can input URLs into Shared Count to find out social sharing data).

Once you click on the URL, you’ll want to select the “Queries” icon to see which search terms people are finding that article from.

[Screenshot: search queries for a selected page]
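By the way, if you would rather pull this query data programmatically than click around the UI, here is a minimal sketch in Python against the Search Console API (the "webmasters" v3 service). The property URL, page URL, date range, and credentials file below are placeholders, so swap in your own:

    # Rough sketch: pull the queries Search Console reports for one article.
    # Assumes a service-account JSON key with access to the verified property;
    # the site URL, page URL, dates, and file name are placeholders.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
    SITE_URL = "https://www.example.com/"            # your verified property
    PAGE_URL = "https://www.example.com/seo-guide/"  # the article you are auditing

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)
    service = build("webmasters", "v3", credentials=creds)

    # Ask for queries, filtered to the one page, over a recent date range.
    request = {
        "startDate": "2018-01-01",
        "endDate": "2018-03-31",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{"dimension": "page", "operator": "equals", "expression": PAGE_URL}]
        }],
        "rowLimit": 100,
    }

    response = service.searchanalytics().query(siteUrl=SITE_URL, body=request).execute()

    # Each row is one "question" people typed into Google before landing on the article.
    for row in response.get("rows", []):
        print(row["keys"][0], "clicks:", row["clicks"], "impressions:", row["impressions"])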

Now go back to your article and make it more in-depth.

And when I say in-depth, I am not talking about word count like I used to focus on at Quick Sprout.

Instead, I am talking about depth… did the article cover everything that the user was looking for?

If you can cover everything in 3,000 words then you are good. If not, you’ll have to make it longer.

The way you do this is by seeing which search queries people are using to find your articles (like in the screenshot above). Keep in mind that people aren’t searching Google in a deliberate effort to land on your site… people use Google because they are looking for a solution to their problem.

Think of those queries that Google Search Console is showing you as “questions” people have.

If your article is in-depth enough to answer all of those questions, then you have done a good job.

If not, you’ll have to go more in-depth.

In essence, you are adding more words to your article, but you aren’t adding fluff.

You’re not keyword stuffing either. You are simply making sure to cover all aspects of the subject within your article.
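As an illustration, here is a rough heuristic sketch (simple word matching, nothing fancy, and not an exact measure of whether a question is truly answered) that flags queries whose words don't show up anywhere in a draft. The file name and query list are just placeholders:

    # Rough heuristic: flag Search Console queries whose words don't appear in the
    # article text. Word matching is not the same as actually answering the question,
    # so treat the output as a checklist of sections to consider, not a verdict.
    import re

    def uncovered_queries(article_text, queries, min_hit_ratio=0.6):
        """Return queries where fewer than min_hit_ratio of their words appear in the article."""
        article_words = set(re.findall(r"[a-z0-9']+", article_text.lower()))
        flagged = []
        for query in queries:
            terms = re.findall(r"[a-z0-9']+", query.lower())
            if terms:
                hits = sum(1 for term in terms if term in article_words)
                if hits / len(terms) < min_hit_ratio:
                    flagged.append(query)
        return flagged

    # Placeholder inputs: your draft and the queries pulled from Search Console.
    article = open("seo-guide-draft.txt").read()
    queries = ["what is seo", "seo basics for beginners", "how long does seo take"]

    for q in uncovered_queries(article, queries):
        print("Consider adding a section that answers:", q)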

This is how you write in-depth articles and not waste your time (or money) on word count.

And that’s how I grew NeilPatel.com without writing too many unnecessary words.

Conclusion

If you are writing 10,000-word articles you are wasting your time. Heck, even articles over 5,000 words could be wasting your time if you are only going after as many words as possible and adding tons of fluff along the way.

You don’t know what people want to read. You’re just taking a guess.

The best approach is to write content that is amazing and within the 2,000- to 3,000-word range.

Once you publish the content, give it a few months and then look at search traffic as well as social sharing data to see what people love.

Take those articles and invest more resources into making them better and ultimately more in-depth (in terms of quality and information, not word count).

The last thing you want to do is write in-depth articles on subjects that very few people care about.

Just look at the Advanced Guide to SEO on Quick Sprout… I made an obvious mistake. I made it super in-depth on “advanced SEO”. But when you search Google for the term “SEO” and you scroll to the bottom to see related queries you see this…

[Screenshot: Google related searches for "SEO"]

People are looking for the basics of SEO, not advanced SEO information.

If I had written a 2,000-word blog post instead of a 20,000-word guide, I could have caught this early on and adapted the article more to what people want versus what I thought they wanted.

That’s a major difference.

So how in-depth are you going to make your content?

The post Writing Content That Is Too In-Depth Is Like Throwing Money Out the Window appeared first on Neil Patel.