Posted by: psilva | August 19, 2014

Is IoT Hype For Real?


It is only fitting that the 20th anniversary of the Gartner Hype Cycle has the Internet of Things right at the top of the coaster. IoT currently sits at the Peak of Inflated Expectations. The Gartner Hype Cycle gives organizations an assessment of the maturity, business benefit and future direction of more than 2,000 technologies. The theme for this year’s Emerging Technologies Hype Cycle is Digital Business.

[Figure: Gartner Emerging Technologies Hype Cycle, 2014]

As you can see, being at the top really means that there is a ton of media coverage about the technology, so much so that it starts to get a little silly. Everyone is talking about it, including this author. What you can also see is the downward trend to follow: the Trough of Disillusionment. Gamification, Mobile Health Monitoring and Big Data all fall into this area. It means they have already hit their big hype point, but it doesn’t necessarily mean it’s over. The Slope of Enlightenment shows technologies that are finally mature enough to have reasonable expectations about. Each technology also has a timeline for when it will mature; for IoT, it looks like 5 to 10 years. So while we’re hearing all the noise about IoT now, society probably won’t be fully immersed for another decade…even though we’ll see gradual steps toward it over the next few years.

Once all our people, places and things are connected, you can also get a sense of what else is coming in the Innovation Trigger area. Come the 2025 time frame, things like Smart Robots, Human Augmentation and Brain-Computer Interfaces could be the headlines of the day. Just imagine: instead of having to type this blog out on a keyboard, I could simply (and wirelessly) connect my brain chip to the computer and just think it.

Hey, stop reading my mind!!

ps

Posted by: psilva | August 13, 2014

The Internet of Sports


Did you see what the NFL is doing this year with sensors?

[Image: NFL sensor – courtesy NFL]

Earlier this month they announced a partnership with Zebra Technologies, a company that provides RFID chips for applications from ‘automotive assembly lines to dairy cows’ milk production.’ This season there will be sensors in the players’ shoulder pads tracking all their on-field movements, including acceleration rates, top speed, length of runs, and even the distance between a ball carrier and a defender. Next year they’ll add sensors for breathing, temperature and heart rate. That’s more stats than ever, and it could change the game forever. Imagine coaches being able to examine that data and instantly call a play based on it, play by play. To me it somewhat takes away that ‘feel’ for the game flow, but having data to confirm or deny that feeling might make for exciting games. Maybe lots of 0-0 overtimes or a 70-0 blowout. Data vs. data. Oh, how I miss my old buzzing electric football game.

The yardsticks will have chips, along with the refs, and all that data is picked up by 20 RFID receivers placed throughout the stadium. Those, in turn, are wired to a hub and server that processes the data. The quarter-sized sensors run on a typical watch battery and transmit to the receivers 25 times a second. The data goes to the NFL ‘cloud’ and is available in seconds. The only thing without a sensor is the ball, but that’s probably coming soon since we already have the 94Fifty sensor basketball.
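To make those numbers concrete, here is a minimal sketch of how 25 Hz position samples could be turned into two of the stats mentioned above, speed and ball-carrier/defender separation. This is not the NFL’s or Zebra’s actual pipeline; the class, field names and sample positions are invented for illustration.

```python
import math
from dataclasses import dataclass

SAMPLE_RATE_HZ = 25  # the stated transmission rate

@dataclass
class Sample:
    player_id: str
    x: float  # field position in meters
    y: float

def speed_mps(prev: Sample, curr: Sample) -> float:
    """Instantaneous speed from two consecutive 25 Hz samples."""
    dist = math.hypot(curr.x - prev.x, curr.y - prev.y)
    return dist * SAMPLE_RATE_HZ  # meters per 1/25 s -> meters per second

def separation_m(a: Sample, b: Sample) -> float:
    """Distance between, say, a ball carrier and the nearest defender."""
    return math.hypot(a.x - b.x, a.y - b.y)

# A runner covering 0.4 m between samples is moving 10 m/s (~22 mph).
prev = Sample("RB-22", 10.0, 5.0)
curr = Sample("RB-22", 10.4, 5.0)
print(f"speed: {speed_mps(prev, curr):.1f} m/s")
print(f"separation: {separation_m(curr, Sample('LB-54', 14.4, 8.0)):.1f} m")
```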

And we’ve had NASCAR RACEf/x for years; this year they are going to track every turn of the wrench with RFID tracking in the pits and sensors on the crew. Riddell has impact sensors in its helmets to analyze, transmit and alert if an impact exceeds a predetermined threshold. Sensors can measure the force of an NBA dunk; they can recognize a pitcher’s grip and figure out the pitch; a bat sensor can measure impact with the ball, the barrel angle of the swing, and how fast the hitter’s hands are moving; and they are tracking soccer player movement in Germany. Heck, many ordinary people wear sensor-infused bracelets to track their activity.

We’ve come a long way since John Madden sketched over a telestrator years ago, and with 300-plus-pound players running around with sensors, this is truly Big Data. It also confirms my notion that the IoT should really be the Internet of Nouns – the players, the stadiums and the yardsticks.

ps

Posted by: psilva | August 12, 2014

Highly Available Hybrid


Achieving the ultimate ‘Five Nines’ of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this, but essentially a few principles apply:

  • Eliminate single points of failure by adding redundancy so if one component fails, the entire system still works.
  • Have reliable crossover to the duplicate systems so they are ready when needed.
  • And have the ability to detect failures as they occur so proper action can be taken.

If the first two are in place, you will hopefully never see a failure, but maintenance is still a must.
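As a quick sanity check on that ‘five nines ≈ 5 minutes’ figure, here is a back-of-the-envelope calculation (a simple illustration, not tied to any product):

```python
# Allowed downtime per year for a given number of "nines" of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines  # e.g. 3 nines -> 0.999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_min:8,.1f} min/yr of downtime")

# Five nines works out to roughly 5.3 minutes of downtime a year,
# which is where the figure above comes from.
```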

BIG-IP high availability (HA) functionality, such as connection mirroring, configuration synchronization, and network failover, allows core system services to remain available in the event that a particular application instance becomes unavailable. Organizations can synchronize BIG-IP configurations across data centers to ensure the most up-to-date policy is being enforced throughout the entire infrastructure. In addition, BIG-IP itself can be deployed as a redundant system in either active/standby or active/active mode.


Web applications come in all shapes and sizes: from static to dynamic, from simple to complex, from specific to general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute the client request to a pool of (multiple) application servers, which then query the database servers for the appropriate content. Each tier has redundant servers, so in the event of a server outage the others take the load and the system stays available.
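Here is a minimal sketch of the behavior the ADC tier provides. The hostnames are made up, and a real ADC adds persistence, SSL offload and much more; this only shows health-checked distribution across a pool.

```python
import itertools

class Pool:
    """Round-robin across application servers, skipping unhealthy ones."""
    def __init__(self, members):
        self.members = members
        self.healthy = set(members)
        self._rr = itertools.cycle(members)

    def mark_down(self, member):
        # A failing health monitor would trigger this.
        self.healthy.discard(member)

    def mark_up(self, member):
        self.healthy.add(member)

    def pick(self):
        for _ in range(len(self.members)):
            m = next(self._rr)
            if m in self.healthy:
                return m
        raise RuntimeError("no healthy pool members")

pool = Pool(["app1.example.net", "app2.example.net", "app3.example.net"])
pool.mark_down("app2.example.net")      # simulate a server outage
print([pool.pick() for _ in range(4)])  # traffic flows only to app1 and app3
```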

This is a tried and true design for most operations and provides resilient application availability within a typical data center. But fault tolerance across two data centers is even more reliable than multiple servers in a single location, simply because a lone data center is itself a single point of failure.

A hybrid data center approach allows organizations not only to distribute their applications when it makes sense, but also to provide global fault tolerance to the system overall. Depending on how an organization’s disaster recovery infrastructure is designed, this can be an active site, a hot standby, some leased hosting space, a cloud provider or some other contained compute location. As soon as a server, application or even an entire location starts to have trouble, organizations can seamlessly maneuver around the issue and continue to deliver their applications.

Driven by applications and workloads, a hybrid data center is really a technology strategy for the entire infrastructure mix of on-premises and off-premises compute resources. IT workloads reside in conventional enterprise IT (legacy systems), an on-premises private cloud (mission-critical apps), a third-party off-premises location (managed, hosting or cloud provider) or a combination of all three.

The various combinations of hybrid data center types can be as diverse as the industries that use them. Enterprises probably already have some level of hybrid, even if it is just a mix of owned space plus SaaS. They typically like to keep sensitive assets in house but have started to migrate workloads to hybrid data centers. Financial industries might have different requirements than retail. Startups might begin entirely with a cloud-based service and then build their own facility if needed. Mobile app developers, particularly in games, often use the cloud for development and then bring it in-house once the app is released. Enterprises, on the other hand, have (historically) developed in house and then pushed out to a data center when ready. The variety of industries, situations and challenges the hybrid approach can address is vast.

Manage services rather than boxes.

ps

Posted by: psilva | July 30, 2014

Internet of Things OWASP Top 10


The Open Web Application Security Project (OWASP) is focused on improving the security of software. Its mission is to make software security visible, so that individuals and organizations worldwide can make informed decisions about true software security risks. Their OWASP Top 10 provides a list of the ten most critical security risks; for each risk it gives a description, example vulnerabilities, example attacks, guidance on how to avoid it, and references to OWASP and other related resources. Many of you are familiar with their Top 10 Most Critical Web Application Security Risks, published for awareness and guidance on some of the critical web application security areas to address. It is a great list, and many security vendors point to it to show the types of attacks that can be mitigated.

Now the Internet of Things (IoT) has its own OWASP Top 10.

In case you’ve lived under a rock for the past year: IoT, or as I like to call it, the Internet of Nouns, is this era where everyday objects – refrigerators, toasters, thermostats, cars, sensors, etc. – are connected to the internet and can send and receive data. There have been tons of articles covering IoT over the last 6 months or so, including some of my own.

The OWASP Internet of Things (IoT) Top 10 is a project designed to help vendors who are interested in making common appliances and gadgets network/Internet accessible. The project walks through the top ten security problems that are seen with IoT devices, and how to prevent them.

The OWASP Internet of Things Top 10 – 2014 is as follows:

  • I1 – Insecure Web Interface
  • I2 – Insufficient Authentication/Authorization
  • I3 – Insecure Network Services
  • I4 – Lack of Transport Encryption
  • I5 – Privacy Concerns
  • I6 – Insecure Cloud Interface
  • I7 – Insecure Mobile Interface
  • I8 – Insufficient Security Configurability
  • I9 – Insecure Software/Firmware
  • I10 – Poor Physical Security

You can click on each to get a detailed view of the threat agents, attack vectors and security weaknesses, along with the technical and business impacts. They also list any privacy concerns along with example attack scenarios. Good stuff!

ps

Posted by: psilva | July 28, 2014

The Cloud is Still a Datacenter Somewhere


Application delivery is always evolving. Initially, applications were delivered out of a physical data center: either dedicated raised floor at the corporate headquarters, some leased space rented from one of the web hosting vendors of the late 1990s to early 2000s, or some combination of both. Soon global organizations and ecommerce sites alike started to distribute their applications and deploy them at multiple physical data centers to address geo-location, redundancy and disaster recovery challenges. This was an expensive endeavor back then, even without adding the networking, bandwidth and leased-line costs.

When server virtualization emerged and organizations had the ability to divide resources for different applications, content delivery was no longer tethered 1:1 with a physical device. It could live anywhere. With virtualization technology as the driving force, the cloud computing industry was formed and offered yet another avenue to deliver applications.

Application delivery evolved again.

As cloud adoption grew, along with the Software, Platform and Infrastructure services enabling it, organizations were able to quickly, easily and cost-effectively distribute their resources around the globe. This allows organizations to place content closer to the user depending on location, and provides some fault tolerance in case of a data center outage.

Today, there is a mixture of options available to deliver critical applications. Many organizations have on-premises private, owned data center facilities, some leased resources at a dedicated location and maybe even some cloud services. In order to achieve or even maintain continuous application availability and keep up with the pace of new application rollouts, many organizations are looking to expand their data center options, including cloud, to ensure application availability. This is important since, according to IDC, 84% of data centers had issues with power, space and cooling capacity, assets, and uptime that negatively impacted business operations. Those issues lead to delays in application rollouts, disrupted customer service or even unplanned expenses to remedy the situation.

Operating in multiple data centers is no easy task, however, and new data center deployments, or even integrating existing data centers, can cause havoc for visitors, employees and IT staff alike. Critical areas of attention include public web properties, employee access to corporate resources, and communication tools like email, along with security and the back-end data replication required for content consistency. On top of that, maintaining control over critical systems spread around the globe is always a major concern.

A combination of BIG-IP technologies provides organizations with global application services for DNS, federated identity, security, SSL offload, optimization and application health/availability to create an intelligent, cost-effective, resilient global application delivery infrastructure across a hybrid mix of data centers. Organizations can minimize downtime, ensure continuous availability and have on-demand scalability when needed.
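Conceptually, the global piece works like a health-aware DNS decision. Here is a toy sketch of the idea; this is not BIG-IP DNS configuration, and the data center names, coordinates, IPs and health flags are all invented:

```python
import math

# Invented inventory: each entry is a data center with a public IP,
# a rough location, and its current health-check status.
DATACENTERS = {
    "us-west": {"ip": "198.51.100.10", "lat": 37.4, "lon": -122.1, "up": True},
    "us-east": {"ip": "198.51.100.20", "lat": 40.7, "lon": -74.0,  "up": False},
    "eu-west": {"ip": "198.51.100.30", "lat": 53.3, "lon": -6.3,   "up": True},
}

def resolve(client_lat: float, client_lon: float) -> str:
    """Answer with the nearest data center that is passing health checks."""
    candidates = [dc for dc in DATACENTERS.values() if dc["up"]]
    if not candidates:
        raise RuntimeError("all data centers down")
    best = min(candidates,
               key=lambda dc: math.hypot(dc["lat"] - client_lat,
                                         dc["lon"] - client_lon))
    return best["ip"]

# A New York client is steered around the failed us-east site to us-west.
print(resolve(40.7, -74.0))  # -> 198.51.100.10
```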

Simplify, secure and consolidate across multiple data centers while mitigating impact to users or applications.

ps

Posted by: psilva | July 22, 2014

Fear and Loathing ID Theft


Do you avoid stores that have had a credit card breach?

You are not alone. About 52% of people avoid merchants who have had a data breach, according to a recent Lowcards survey. Lowcards surveyed over 400 random consumers to better understand the impact of identity theft on consumer behavior. 17% said they or a family member was a victim of identity theft over the last year, with half the cases being credit card theft, and 94% said they are more concerned or equally concerned about ID theft. Lowcards estimates that there were 13.5 million cases of credit card identity theft in the United States over the last 12 months.

These concerns are also changing the way some people shop.

Over half (56%) are taking extra measures to protect themselves from identity theft. These behaviors include using a debit card less (28%), using cash more (25%), ordering online less (26%) and checking their credit report more (38%). These are all reasonable responses to the ever-challenging game of protecting your identity, and that’s important since 89% of security breaches and data loss incidents last year could have been prevented, according to the Online Trust Alliance’s 2014 Data and Breach Protection Readiness Guide.

The game is changing however, and mobile is the new stadium. Let’s check that scoreboard.

Most of the security reports released thus far in 2014, like the Cisco 2014 Annual Security Report and the Kaspersky Security Bulletin 2013, show that threats to mobile devices are increasing. We are using our devices more, and using them for sensitive activities like shopping, banking and storing personally identifiable information. It is no wonder the thieves are targeting mobile and getting very good at it. Kaspersky’s report talks about the rise of mobile botnets and their effectiveness, since we never shut off our phones: they are always ready to accept new tasks, either from us or from a remote command-and-control server, with SMS trojans leading the pack. Mobile trojans can even check the victim’s bank balance to ensure the heist is profitable, and some will even infect your PC when you plug the phone into it over USB.

[Figure: Distribution of exploits in cyber-attacks by type of attacked application – Kaspersky Security Bulletin 2013]

I guess the good news is that people are becoming much more aware of the overall risks surrounding identity theft and breaches, but will the convenience and availability of mobile put us right back in that dark alley? Mobile threats are starting to reach PC proportions, with online banking a major target and many of the potential infections delivered via SMS messages. Sound familiar?

Maybe we can simply cut and replace ‘PC’ with ‘Mobile’ in all those decade-old warnings of:

Watch what you click!

ps

Posted by: psilva | July 14, 2014

Apps Driving Attention


The mobile platform, meaning tablets and smartphones, now accounts for 60% of total digital media time spent, according to comScore. That is a 10-point jump from 50% just a year ago. On top of that, mobile apps accounted for 51% of all digital media time spent in May 2014. Many content categories, like radio, photos and maps, are becoming almost exclusively mobile. Digital radio and photos both generate 96% of their engagement from mobile, while maps and instant messaging get 90% of their interaction from mobile devices.

You might be wondering, like I did, where social networks come in, since it seems like almost everyone updates their social feeds through mobile. Social is actually the #1 category for overall digital engagement, taking about 20% of overall digital time spent, and it gets 71% of its activity from mobile. Social media engagement on mobile has grown 55% over the last year and has accounted for 31% of all growth in internet engagement.

[Chart: Share of Time Spent by Platform – Leading Categories]

So who is driving the mobile app explosion? Teenagers. About 60% of 12-to-17-year-olds had a smartphone in 2013, topping even the 45+ crowd for smartphone ownership, according to Arbitron and Edison Research. The app money makers are not the initial charge for the program but all the in-app purchases, along with the ads attached to the app.

Mobile is clearly the new way we consume digital content, and it continues to grow. We are also interacting with specific apps rather than browsing, and those apps are growing at an amazing pace. Today’s infrastructure needs to be even more flexible, intelligent and resilient to handle the surge. And ultimately, the apps and the content/experience they provide need to be highly available and delivered quickly and securely to the person…just like any other typical application.

ps

Posted by: psilva | July 1, 2014

Will the Cloud Soak Your Fireworks?


This week in the States, the nation celebrates its independence, and many people will be attending or setting off their own fireworks shows. In Hawaii, fireworks are shot off more during New Year’s Eve than on July 4th, and there are even Daytime Fireworks now.

Cloud computing is exploding like fireworks, with all the Oooooooo’s and Ahhhhhhh’s of what it offers, but the same groan, like the traffic jam home, might be coming to an office near you.

Recently, Ponemon Institute and cloud firm Netskope released a study, Data Breach: The Cloud Multiplier Effect, in which 613 IT and security professionals indicated that deploying resources in the cloud triples the probability of a major breach. Specifically, for a data breach with 100,000+ customer records compromised, the cost would be just over $20 million, based on Ponemon Institute’s May 2014 ‘Cost of a Data Breach’, and using cloud services may triple the risk of a breach of that scale. It’s called the ‘cloud multiplier effect,’ and it translates to a 3% higher risk of a data breach for every 1% increase in the use of cloud services. So if you had 100 cloud services, you would only need to add 25 more to increase the possibility of a data breach by 75%, according to the study.
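The study’s arithmetic, spelled out (just the numbers quoted above, nothing more):

```python
# 3% higher breach risk for every 1% increase in cloud service use.
RISK_PER_PCT = 3

baseline_services = 100
added_services = 25

pct_increase = added_services / baseline_services * 100  # 25% more services
risk_increase = RISK_PER_PCT * pct_increase              # -> 75% higher risk
print(f"+{added_services} services -> {risk_increase:.0f}% higher breach risk")
```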

69% of the respondents felt that their organizations are not proactive in assessing what data is too sensitive to be stored in the cloud, and 62% said that the cloud services their companies use are not fully tested to make sure they are secure. Most, almost three-quarters, believed they would not even be notified of a breach that involved lost or stolen intellectual property, business-confidential information or even customer data. Not a lot of confidence there. The security respondents felt around 45% of all software applications used by the company were cloud-based, yet half of those had no IT visibility.

This comes at a time when many organizations are looking to the cloud to solve a bunch of challenges. At the same time, this sounds a lot like the cloud concerns of years past – security and risk – and it is the perception of, not necessarily the reality of, what’s actually occurring. It very well could be the case – with all the moving parts, the loss of control, being out in the wild, etc. – that the risk is greater.

And I think that’s the point. The risk.

While the cloud does offer organizations amazing opportunities, what these people are saying is that companies need to do a better job at the onset, during the initial evaluations, of understanding the risk of the type(s) of data getting sent to the cloud along with the specific cloud service that holds it. It has only been a few years that the cloud has been taken seriously, and from the beginning there have been grumblings about the security risks and loss of control. Some cloud providers have addressed many of those concerns, and organizations are subscribing to services or building their own cloud infrastructure. It is where IT is going.

But still, as with any new technology bursting with light, color and noise, take good care where and when you light the fuse.

ps

Posted by: psilva | June 26, 2014

Velocity 2014 – That’s a Wrap!


I wrap it up from Velocity Conference 2014. Special thanks to Dawn Parzych, Robert Haynes, Cyrus Rafii and John Giacomoni for joining us on camera, along with Robert, Courtney and Natasha behind the lens. Also, learn what the top questions asked at F5 booth 800 this week were. Reporting from the Santa Clara Convention Center, thanks for watching!

ps


Dawn Parzych, F5 Sr. Product Manager, returns to demo BIG-IP’s image optimization solution. Web images can consume up to 60% of a site’s load, and BIG-IP can significantly reduce the image file size and deliver those pictures much faster to the viewer.
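To illustrate why recompression alone can shrink page weight so much, here is a rough sketch using Pillow. This is not how BIG-IP implements it (BIG-IP optimizes in the delivery path), and the file names are placeholders:

```python
import os
from PIL import Image  # pip install Pillow

def recompress(src: str, dst: str, quality: int = 70) -> None:
    """Re-save a JPEG at a lower quality setting and report the savings."""
    with Image.open(src) as im:
        im.save(dst, "JPEG", quality=quality, optimize=True)
    before, after = os.path.getsize(src), os.path.getsize(dst)
    print(f"{before:,} -> {after:,} bytes ({1 - after / before:.0%} smaller)")

recompress("hero.jpg", "hero-optimized.jpg")
```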

 

ps

