
It’s all about integration and optimization

Wind power has reached the stage where squeezing out every cent of cost matters as much as providing the relevant functionality for power system support.

I hope it’s clear by now to everyone that wind power is an essential part of our generation portfolio as we move towards a greener world. In recent decades, we have watched wind technology evolve from simple backyard generators into actual power plants, some with a power capacity of a gigawatt or more. Turbine technology has grown from simple blades and rotors using robust induction generators to sophisticated power generating machines featuring advanced blade designs and actuators, supported by highly developed power electronics and control intelligence systems that can turn an array of turbines into a versatile, cohesive power production facility.

Continuous updates of grid code requirements have mostly driven these technology advances, which means grid operators are defining the functionality of wind power plants. As the penetration of wind power and other renewables in our power systems increases, the plants generating this power must be able to provide relevant system support so that reliable supplies of quality power can reach end users. For renewable plants, this list of essential requirements includes active and reactive power support, power regulation capacity, provision of ancillary services such as inertia and frequency support, black start capabilities, and power forecasting. Some of these points are applicable today, while others will be relevant in future versions of grid codes. But they are all about safely and reliably integrating renewable power into our power systems – and ensuring we keep the lights on with grid-quality supplies of green energy.

Apart from the challenges of complying with grid codes, which mainly concern wind turbine manufacturers anyway, wind plant owners and operators must also find ways to reduce the cost of wind energy so that it remains an effective generation strategy in today’s energy market, especially as we move away from preferential tariffs toward an open market for all.

Operations and maintenance is one area where owners and operators can effectively combat costs in their wind power plants. Solutions and tools under the magnifying glass include predictive scheduling of maintenance activities, maintenance on low-wind days, and the use of qualified local third-party personnel to help minimize costs and improve annual energy production. The wish list of desired functionality includes applications that can crunch large amounts of data to detect the cause of failures, or even predict them, as well as control and optimization algorithms that reduce the load on assets while maintaining maximum power production potential. It is all about optimizing wind power production to pull as much value as possible from the technology; a minimal sketch of the low-wind scheduling idea follows.
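As a concrete illustration of scheduling maintenance on low-wind days, here is a minimal, hypothetical Python sketch (forecast data and names invented for illustration) that ranks forecast days by expected wind speed and proposes the least productive ones as maintenance windows:

    # Hypothetical sketch: pick the lowest-yield days in a wind forecast as
    # candidate maintenance windows, so downtime costs the least production.
    from typing import List, Tuple

    def maintenance_windows(forecast: List[Tuple[str, float]], days_needed: int) -> List[str]:
        """forecast holds (day, expected mean wind speed in m/s) pairs; the
        cheapest days to stop the turbines are those with the lowest wind."""
        ranked = sorted(forecast, key=lambda day: day[1])
        return [day for day, _ in ranked[:days_needed]]

    week = [("Mon", 9.2), ("Tue", 4.1), ("Wed", 3.5), ("Thu", 11.0),
            ("Fri", 6.7), ("Sat", 2.9), ("Sun", 8.4)]
    print(maintenance_windows(week, days_needed=2))  # ['Sat', 'Wed']

A production scheduler would also weigh electricity prices, crew availability and failure predictions, but the core trade-off is the one shown: take assets offline when the energy forgone is smallest.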

All of this tells me that as wind power technology moves steadily forward, it is becoming a stable, mature power generation industry, with its own best practices and standard methodologies that are acknowledged and understood by everybody.

Read more...

The Chamber of Commerce in Genoa awards ABB Italy an historic Italian business certificate for 100 years of uninterrupted business

ABB was recently awarded an historic Italian business certificate by the Chamber of Commerce in Genoa, which recognizes companies that have operated without interruption for more than 100 years. As the expected lifetime of modern corporations shrinks, being in business for more than a century has become a real achievement.

The average lifespan of a company listed on the S&P 500 index has decreased from 67 years in the 1920s to just 15 years today, according to a study by Professor Richard Foster at Yale University. By 2020, Professor Foster estimates, more than three-quarters of the S&P 500 will be made up of companies we have not yet heard of.

“Even the big, solid companies, the pillars of the society we live in, seem to hold out for not much longer than an average of 40 years,” writes Arie de Geus, a former Royal Dutch/Shell Group manager in his book about corporate survival, The Living Company.

In recent times, some former giants have vanished – car maker Saab, for instance, or financial firm Lehman Brothers – while others struggle to reinvent themselves, like former mobile phone colossus Nokia, or the iconic camera company Kodak, now just a shadow of its former self.

And yet some companies (mostly small, family-run businesses) have survived for hundreds of years; a handful (mainly in Japan) are more than 1,000 years old. In his book, Mr. de Geus provides some core characteristics of long-lived companies, big and small.

They are sensitive to changing conditions, in business and in the societies where they do business. They have a strong sense of identity, are open to innovation and experiments by their staff, and careful about finances. They are good corporate citizens, flexible, innovative, and able to reinvent themselves – like ABB.

Nothing seems to guarantee a long corporate life more than a sound reputation as a good citizen, backed by quality products and people. ABB’s parent companies (Swedish ASEA, founded in 1883, and Swiss Brown Boveri, founded in 1891) developed those qualities, as well as a practice of innovation and customer focus that made them stand out in the corporate world.

This corporate DNA is an important legacy in the context of today’s ultra-competitive business climate.
It is fitting that the Genoa Chamber of Commerce handed ABB an historic business certificate, since in 1903 the company’s predecessor, Brown Boveri, acquired Italy’s oldest electromechanical company, Tecnomasio Italiano, founded in 1863.

Over the years ABB has gained experience in most of Italy’s electromechanical sector, and acquired important companies that have contributed to Italy’s industrial history, such as Ansaldo Trasformatori, Elsag Bailey, Ercole Marelli, SACE, Officine Adda and IEL.

Today, ABB Italy’s 6,300 people are concentrated in the north and central parts of the country. The company invests 3.2% of its turnover in R&D activities, and Genoa is a main operational center, home to ABB engineering, R&D and automation facilities focused on the energy, industrial plant and marine sectors. This capacity was enhanced in 2010 when the new Sestri Ponente offices were inaugurated in Genoa, featuring an advanced and futuristic Demonstration Center for the Symphony Plus total plant automation system. And in 2012, ABB acquired RGM Polycontrol, a Genoa-based subsidiary of RGN S.p.A., which specializes in auxiliary power systems for rail vehicles.

ABB is a leading actor in the Italian industrial sector, a role it has played successfully for more than 100 years. ABB invests, innovates and creates value in Italy, enhancing local roots even as it scours the world for new business opportunities.

ABB Italy is a part of this long-term legacy, which is helping to develop the Italian economy by supplying products, systems, solutions and services that boost customer performance.


Read more...

Silo Busting is Essential to Delivering Personalized Experiences

This article is part of our series on customer experience where we focus on topics relating to connecting data, intelligence and experiences. Further reading: Segmentation Must Be Connected to the Data and Technology Stack.

Digital technologies have dramatically improved the experiences of consumers, making it much easier for them to find what they want and to get the service levels they expect. Their product and channel choices are greatly improved, which allows them to act at their own convenience.

Yet rather than satisfying the contemporary consumer, the opposite has happened: customers’ expectations have accelerated, fueled by the very improvements in customer experience that digital technologies provide.

Is it any wonder then that so many companies are failing to deliver the seamless, excellent experiences customers demand as their basic expectation?

Data Silos Breed Chaos

The culprits, in many instances, are the brands themselves and their unwillingness or inability to break down organizational and technological silos within their own companies.

Data silos occur because businesses grow and change over time without a plan for managing their data, and because separate teams inside or outside a business don’t always work in a consistent way.

In a report called Culture for a Digital Age, authored by Julie Goran, Ramesh Srinivasan, and Laura LaBerge, McKinsey & Company identified functional and departmental silos as one of the most crucial digital culture deficiencies companies face.

“Each obstacle is a long-standing difficulty that has become more costly in the digital age,” wrote the authors. “The narrow, parochial mentality of workers who hesitate to share information or collaborate across functions and departments can be corrosive to organizational culture.”

It is just as damaging and corrosive to the relationships brands have with customers.

No wonder analysts like Gartner say the majority of companies are diverting money into data programs this year.

Oracle digital CX evangelist Mark de Groot says, “In our research, Next Generation Customer Experience: The Death of the Digital Divide, we found that a significant number of customers aren't impressed with the digital experiences brands offer.”

The authors of the report, which surveyed 7,000 people in seven countries, were blunt in their conclusions: “The cost of failing – being slow, unresponsive, unavailable or incapable of adaptation – is brutal. Customers today have higher expectations. And when disappointed or frustrated, they leave. (In the case of the millennials, they don’t even bother to say goodbye.)”

The only way to overcome the problem of fragmented experiences is to take control of data.

Cross-Channel Challenges

Marketers understand that one of the biggest problems they face with cross-channel marketing is understanding customer interactions across those channels.

But often they lack access to cross-channel analytics, making it hard for them to improve performance. They also find it difficult to track KPIs across channels.

Ultimately, though, until silos are tamed, it is almost impossible to build a usable unified view of the customer’s complete relationship with the brand.
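Mechanically, a unified view is little more than a join on a shared identifier; the hard part is the organizational agreement on what that identifier is. A toy Python sketch, with invented records and field names, of folding per-channel events into one customer timeline:

    # Toy sketch: fold per-channel exports into one customer view keyed on a
    # shared identifier (email here, purely for illustration).
    from collections import defaultdict

    web_events = [{"email": "ana@example.com", "channel": "web", "action": "viewed_pricing"}]
    store_events = [{"email": "ana@example.com", "channel": "store", "action": "purchase"}]
    support_events = [{"email": "ana@example.com", "channel": "support", "action": "opened_ticket"}]

    unified = defaultdict(list)
    for event in web_events + store_events + support_events:
        unified[event["email"]].append((event["channel"], event["action"]))

    print(unified["ana@example.com"])
    # [('web', 'viewed_pricing'), ('store', 'purchase'), ('support', 'opened_ticket')]

Until the silos agree on that shared key and expose their events, no amount of downstream analytics can reconstruct the relationship.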

Take the healthcare sector as an example.

Gartner Research Director Mike Jones said that one of the most common objectives of the healthcare sector is delivering a birth-to-death digital health record for patients.

While that may sound simple, the reality involves very serious complexity. “Bringing information from many different healthcare systems [that have] different structures, different data formats, different approaches to sharing and governance is extremely problematic to deliver. But without that the rest of the objectives almost become unachievable.”
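To see why differing structures and formats are such a problem, consider this deliberately simplified Python sketch (all field names invented; real healthcare integration rests on standards such as HL7 and FHIR, plus governance work far beyond code). Every source system needs its own adapter into a canonical record before any unified history can exist:

    # Two systems express the same fact in different shapes; each needs an
    # adapter into one canonical record. Field names are hypothetical.
    def from_system_a(rec: dict) -> dict:
        # System A flat layout: {"pt_id": ..., "dob": ..., "obs": ..., "val": ...}
        return {"patient_id": rec["pt_id"], "born": rec["dob"],
                "observation": rec["obs"], "value": rec["val"]}

    def from_system_b(rec: dict) -> dict:
        # System B nested layout: {"patient": {"id": ..., "birthDate": ...}, ...}
        return {"patient_id": rec["patient"]["id"], "born": rec["patient"]["birthDate"],
                "observation": rec["code"], "value": rec["reading"]}

    canonical = [
        from_system_a({"pt_id": 7, "dob": "1970-01-01", "obs": "bp", "val": "120/80"}),
        from_system_b({"patient": {"id": 7, "birthDate": "1970-01-01"},
                       "code": "bp", "reading": "118/79"}),
    ]
    print(canonical)  # both records now share one schema

Multiply that adapter work by dozens of systems, each with its own sharing and governance rules, and the scale of the problem Jones describes becomes clear.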

In more than half the programs Jones studied, organizations were focused on four objectives:

  1. Patient ownership of data
  2. Big data and analytics platforms
  3. Open architectures and open standards for interoperability
  4. Developing new citizen services, which could allow online access to records

Each of these objectives can only be satisfied once organizations have their data stories aligned.

The smart application of technology can unify data silos for the benefit of all teams and partners. Oracle’s strategy is to acquire best-of-breed technologies and then use our significant development experience to integrate them.

The win for our customers: rolling upgrades that add features, fix issues and speed tasks up. Then for their customers: seamless personalized experiences that build trust and confidence in the brand.

Read more...

Additive Manufacturing and Innovation for Automotive: Pushing the Limits of Performance

At this year's Hannover Messe, Siemens and Bugatti Automobiles are presenting the world's largest hybrid functional assembly, based on 3D-printed, hollow, thin-walled titanium metal components and ceramic-coated wound high-modulus carbon fiber tubes: an extremely lightweight yet ultra-rigid rear wing travel and adjustment system for the Bugatti Chiron.

How do you make something even more perfect?

Siemens and Bugatti will be showcasing this at Hannover Messe.

Utilizing the power of an integrated product development software platform, Siemens and Bugatti reduced the weight of the already lightweight Chiron wing system by a further 50% while maintaining its rigidity. This was accomplished by combining 3D-printed titanium components, made using selective laser melting (SLM), with carbon fiber reinforced tubes. Visitors to the Hannover Messe can see the optimized rear-wing hydraulic system on display in Hall 6 at the Siemens PLM Software booth J30.


The Chiron, like its predecessor, the Veyron, moves at speeds far beyond 400 km/h. While a Boeing 747 jumbo jet takes off from the ground at 280 km/h, the Bugatti super sports car must remain safely on the ground at these high speeds and cope with extremely high transverse and longitudinal dynamic requirements. This is only possible through the use of active vehicle aerodynamics. With the help of a sophisticated hydraulic system, the distance of the vehicle to the road is precisely controlled using a rear wing that extends and aligns while the vehicle's front diffuser flaps open and close. This system is unique in the automotive world, and its complexity is comparable only to corresponding aviation systems.


Siemens and Bugatti have carried out the entire innovation process, from the virtual wind tunnel to initial component design, in one software platform for the purpose of showcasing an end-to-end process at Hannover Messe. Using one platform helps to avoid time-consuming and often error-prone data conversions between different file formats and re-modeling of designs. The results are more precise, and the time required to get to the first component is dramatically reduced. Even the virtual wind tunnel and the virtual test tracks are part of the Siemens vehicle engineering solution, enabling the marriage of the digital product and production twins from concept through optimization and on to production, including CNC machine control for part finishing.

“By simulating functional performance throughout the entire vehicle development process chain using one integrated platform, from aerodynamics through a highly complex body to the additive manufacturing process for lightweight components, we can gain an enormous advantage when striving to push the limits of performance for an already perfect car,” says Mr. Götzke, Head of New Technologies at Bugatti. “We’re thrilled to collaborate with Siemens and other excellent partners to showcase what’s possible for the automotive world by combining the latest technologies and expertise.”


Using Siemens’ digital innovation platform, including NX and Simcenter capabilities for generative design, composite design, and multi-physics simulation, the partners accelerated the innovation process tenfold; the optimized aerodynamics control system delivers reduced weight and aerodynamic drag for enhanced vehicle performance.

At Siemens, we are excited to co-innovate with a network of industry leaders like Bugatti Automobiles, Fraunhofer IAPT, East-4D, and Vogt Engineering to constantly push the limits of what’s possible. Investing in new technologies and their applications, and showcasing how innovation unlocks greater levels of performance, helps Siemens support its customers, partners and the automotive industry as a whole in accelerating forward.


For more information about Bugatti Automobiles, please visit: www.bugatti.com

Read more...

How to move to a disruptive network

Emerging network technologies such as SDN, SD-WAN and intent-based networking promise to improve service and streamline operations, but don't let the transition process throw a wrench into existing activities.

Disruptive network technologies are great—at least until they threaten to disrupt essential everyday network services and activities. That's when it's time to consider how innovations such as SDN, SD-WAN, intent-based networking (IBN) and network functions virtualization (NFV) can be transitioned into place without losing a beat.

"To be disruptive, some disruption is often involved," says John Smith, CTO and co- founder of LiveAction, a network performance software provider. "The best way to limit this is to use proven technology versus something brand new—you never want to be the test case."

Smith suggests limiting risk by following a crawl, walk and run approach. "Define the use case and solve it while initially limiting the risk exposure to a discrete set of end users for proof of concept testing," he says. "It’s always good to ensure that the business case will drive the need for the disruptive networking technology—it helps justify the action.”

"Starting with a smaller proof of concept in a non-production environment is great way to get comfortable with the tech and gain some early operational experience," advises Shannon Weyrick, vice president of architecture at DNS and traffic management technologies provider NS1. Before launching any disruptive technology, make sure everyone involved recognizes the value, understands the technology and rollout process and agrees on the goals and metrics, he adds.

Switching safely to SDNs

Software defined networking (SDN) is designed to make networks both manageable and agile. Utilizing a proven technology that’s been successful in the field is vital to ensuring minimal disruption around SDN deployments, Smith says. "On the data center side, Cisco ACI and VMware NSX are reliable infrastructure technologies, but it really depends what fits best with the business," he observes.

Full network visibility is essential to minimizing disruption, as an SDN installation works out its inevitable start-up kinks. "Having visibility solutions in place, such as network performance monitoring and diagnostic (NPMD) tools, can eliminate deployment errors and quickly isolate issues," Smith explains.

Kiran Chitturi, CTO architect at Sungard Availability Services, an IT protection and recovery services provider, recommends choosing an approach that embraces open standards and encourages an open ecosystem between customers, developers and partners. "Before adopting at scale, be patient in selecting specific use cases, like optimizing networks for specific workloads, access control limits and so on," he says.

Start with the open source and open specification projects, suggests Amy Wheelus, network cloud vice president at AT&T. For the cloud infrastructure, the go-to open source project is OpenStack, with many operators and different use cases, including at the edge. For the service orchestration layer, ONAP is the largest open source project, she notes. "At AT&T, we have launched our mobile 5G network using several open source software components: OpenStack, Airship and ONAP."

Weyrick recommends "canarying" traffic before relying on it in production. "Bringing up a new, unused private subnet on existing production servers alongside existing interfaces and transitioning less-critical traffic, such as operational metrics, is one method," he says. "This allows you to get experience deploying and operating the various components of the SDN, prove operational reliability and gain confidence as you increase the percentage of traffic being transited by the new stack."
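A rough Python sketch of that canarying logic (names invented): hash each flow key into a bucket so a given flow is always routed the same way, then raise the canary percentage as confidence grows.

    # Deterministically send a configurable fraction of low-risk traffic
    # (e.g. operational metrics) over the new SDN path.
    import hashlib

    CANARY_PERCENT = 5  # start small; increase during rollout

    def use_new_sdn_path(flow_key: str, percent: int = CANARY_PERCENT) -> bool:
        """Hash the flow key so each flow is routed consistently."""
        bucket = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % 100
        return bucket < percent

    for flow in ["metrics:host-a", "metrics:host-b", "billing:host-c"]:
        path = "new-sdn-subnet" if use_new_sdn_path(flow) else "legacy-path"
        print(flow, "->", path)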

It's also important to have a backup plan on hand. "Even the best-laid plans need a fallback," Weyrick says. "Make sure you have a plan for alternate transit for critical subsystems should the SDN fail." Ideally, such a strategy would include automated failover. "But even a manual plan, thought out ahead of time for worst-case scenarios, may prove helpful and increase your confidence during and after transition," he adds.
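A minimal sketch of such an automated fallback, assuming a hypothetical health endpoint and a pre-approved switch_route() action (both placeholders for your environment, not a real product API):

    # Probe the SDN path and fall back to the planned legacy route after
    # several consecutive failures.
    import socket

    FAILURE_THRESHOLD = 3

    def path_is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_and_failover(failures: int, host: str = "sdn-gw.example.net",
                           port: int = 443) -> int:
        if path_is_healthy(host, port):
            return 0  # healthy: reset the failure counter
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            print("SDN path down; moving critical subsystems to legacy transit")
            # switch_route()  # push the pre-approved fallback config here
        return failures

Run from a scheduler or loop, this is the "manual plan, thought out ahead of time" with the decision step automated.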

The many paths to SD-WAN adoption

There are many ways to take advantage of SD-WAN, which applies SDN's benefits to wide area networks. "Customers can leverage existing infrastructure from vendors like Riverbed or Cisco," Smith says. Organizations can also opt for new features added to security appliances, like those from Fortinet and WatchGuard, or they can leverage virtual form factors from SaaS providers.

Regardless of the tools selected, Smith recommends piloting the technology at a handful of selected sites. "Document the lessons you learn and use that [information] to write the MOPs (methods of procedures) needed for site cutovers once full deployment begins." He notes that potential adopters also need to understand how the technology scales. "The pilot may run great, but when you scale up to the number of planned sites you may hit unforeseen issues if you haven’t planned for them," Smith says.

SD-WAN implies the use of a controller to manage connections between a company’s branch offices, states Andrei Lipnitski, an information communication and technology department manager at software development company ScienceSoft. "To move to SD-WAN, the company needs one controller installed in the head office, with a system administrator to manage it, and to configure SD-WAN routers, which replace outdated hardware used in branch offices," he says.

SD-WAN architectures that support passthrough network setups allow the existing network configuration to be left unchanged. "Once that install is complete, the network should function identically to before," notes Jay Hakin, CEO of Mushroom Networks, an SD-WAN vendor. "A good practice is to have a period of time where this setup is left running to make sure all applications and cloud services are running uninterrupted," he adds. Once reliable operation has been confirmed, adding additional WAN resources to the SD-WAN appliance, as well as configurations for any additional advanced features, becomes a staged and scheduled network modification and therefore does not carry any downtime risk, Hakin notes.

IBN's unintended consequences

Intent-based networking (IBN) technology advances SDN an additional step. While SDNs have largely automated most network management processes, a growing number of organizations now require even greater capabilities from their networks in order to manage their digital transformation and ultimately assure that their network is operating as planned.

IBN allows administrators to set specific network policies and then rely on automation to ensure that those policies are implemented. "There's a lot of hype and misinformation around IBN," Smith says. "There's still some debate about what it actually can and can’t do ... so customers need to really investigate and spend time understanding what’s real and what’s theoretical," he cautions.

Adopting IBN without risking disruption requires a great deal of patience and practice, observes Tim Parker, vice president of network strategy at data center and colocation provider Flexential. "The more difficult part that almost outweighs the benefits is moving to ACI (application centric infrastructure) or new operating systems that support [IBN]," he explains. "For example, we automated our DDoS scrubbing based on NetFlow data from Kentik ... and Python scripts that react when a trigger or threshold is reached," he notes. "But it's far from [reaching] the true AI of making smart decisions based on learning the impacts of the last decision."
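As a schematic of that kind of threshold-triggered automation (stand-in flow records and a placeholder action, not the Kentik API or Flexential's actual scripts):

    # Aggregate flow bytes per destination for one interval and fire a
    # scrubbing action when a destination crosses the limit.
    from collections import Counter

    BYTES_PER_SEC_LIMIT = 500_000_000  # example threshold: 0.5 GB/s per target

    flows = [  # (destination IP, bytes observed this interval)
        ("203.0.113.10", 420_000_000),
        ("203.0.113.10", 130_000_000),
        ("198.51.100.7", 8_000_000),
    ]

    totals = Counter()
    for dst, nbytes in flows:
        totals[dst] += nbytes

    for dst, nbytes in totals.items():
        if nbytes > BYTES_PER_SEC_LIMIT:
            print(f"threshold exceeded for {dst}: {nbytes} B/s; trigger scrubbing")
            # trigger_scrubbing(dst)  # placeholder: divert dst via the scrubber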

Andrew Wertkin, CTO of BlueCat, a network technology firm, believes that IBN is far more than a technology transformation. "It also affects organization skill-sets, operations, compliance/governance and existing service level agreements." He recommends that organizations assess their readiness in all of these areas. "Don’t get over your skis," Wertkin advises. "Start small and focused."

Look before leaping to NFV

Network functions virtualization (NFV) abstracts network functions, allowing them to be installed, controlled and manipulated by software running on standardized compute nodes. "NFV and SDN free networks from their dependence on underlying physical hardware," says Bill Long, vice president of interconnection services at data center and colocation provider Equinix. Instead, network orchestration and control are managed through software, without specialized equipment confined to a specific location. "Companies can safely connect their networks to applications and cloud services wherever they are and compute resources can be turned up or down as needed," Long explains. "This greatly increases scalability and simplicity."

Troubles can arise when enterprises fail to fully think through their need for NFV. "A lot of companies are jumping into NFV without first justifying a business case," Smith says. "For example, if the business needs more flexibility at its branch locations, then using NFV to spin up new network services is a great idea."

Adoption disruptions can be minimized by ensuring clear communication between IT teams. "NFV decouples network functions from the underlying server hardware, which enables greater flexibility and elasticity," says Ashish Shah, a vice president at Avi Networks, a data center and cloud application platform provider. "However, since the server team is responsible for maintaining and patching x86 servers, clear understanding and communications between the server and networking teams will be important."

Centralized policy and lifecycle management of virtual network functions is another important consideration for long-term success. Managing each instance individually will be cumbersome and can obviate the benefits of the transition to NFV, Shah warns.

Migrating existing applications will require additional care to ensure that all application dependencies, scripts and policies are accounted for. "Taking steps to document these dependencies can reduce disruptions," Shah says.

In the bigger picture, "whether focusing on SDN, SD-WAN, IBN or NFV, it’s important to remember that with each new technology comes new tools, training, support and more," Smith says. "Ensuring your teams have the proper tools and training is critical to successful deployments and minimal disruption."

Read more...

What programming languages rule the Internet of Things?

Does the IoT run on the same programming languages that drive the rest of the technology world? Yes, mostly.

As the Internet of Things (IoT) continues to evolve, it can be difficult to track which tools are most popular for different purposes. Similarly, trying to keep tabs on the relative popularity of programming languages can be a complex endeavor with few clear parameters. So, trying to figure out the most popular programming languages among the estimated 6.2 million IoT developers (in 2016) seems doubly fraught — but I’m not going to let that stop me.

There’s not a lot of information on the topic, but if you’re willing to look at sources ranging from Medium to Quora to corporate sites and IoT blogs, and you’re willing to go back a few years, you can pick up some common threads.

IoT Developer Survey: Top IoT programming languages

According to the Eclipse Foundation’s 2018 IoT Developer Survey, here are the top IoT programming languages:

  1. Java
  2. C
  3. JavaScript
  4. Python
  5. C++
  6. PHP
  7. C#
  8. Assembler
  9. Lua
  10. Go
  11. R
  12. Swift
  13. Ruby
  14. Rust

Those top four positions haven’t budged since the 2017 IoT Developer Survey, when Java, C, JavaScript, and Python topped the chart.

Looking deeper, though, the 2018 survey also ranked IoT programming languages by where the code will run: in IoT devices, gateways, or the cloud. For devices, C and C++ lead Python and Java, while for gateways the order is Java, Python, C and C++. In the cloud, it’s Java, JavaScript, Python, and PHP.

Based on that data, according to Chicago-based software shop Intersog, “If it’s a basic sensor, it’s probably using C, as it can work directly with the RAM. For the rest, developers will be able to pick and choose the language that best suits them and the build.” Intersog also cited Assembly language, B#, Go, ParaSail, PHP, Rust, and Swift as having IoT applications, depending on the task.

IoT programming languages that pay the most

Back in 2017, IoT World took a different approach, trying to suss out which IoT programming languages pay developers the most. The results?

“Java and C developers can, on average, expect to earn higher salaries than specialists in the other languages used in the IoT, although senior Go coders have the highest salary potential. Skilled Go developers are among the best paid in the industry, even though junior and mid-level Go developers earn modestly compared to their peers.”

App development firm TechAhead, meanwhile, named C, Java, Python, JavaScript, Swift, and PHP as the top six programming languages for IoT projects in 2017.

Finally, over on Quora, the arguments over IoT programming languages continue to rage, with one long-running thread (“Which programming languages will be most valuable in the IoT?”) attracting more than 20 answers starting in 2015 and continuing through 2018. The nominees mostly revolve around the usual suspects, with Java, Python, and C/C++ predominating.

A multilingual future for IoT?

Clearly, there’s a consensus set of top-tier IoT programming languages, but all of the top contenders have their own benefits and use cases. Java, the overall most popular IoT programming language, works in a wide variety of environments — from the backend to mobile apps — and dominates in gateways and in the cloud. C is generally considered the key programming language for embedded IoT devices, while C++ is the most common choice for more complex Linux implementations. Python, meanwhile, is well suited for data-intensive applications.

Given the complexities, maybe IoT for All put it best. The site noted that, “While Java is the most used language for IoT development, JavaScript and Python are close on Java's heels for different subdomains of IoT development.”

Perhaps the most salient prediction, though, turns up all over the web: IoT development is multilingual, and it's likely to remain multilingual in the future.

Read more...

Cisco warns a critical patch is needed for a remote access firewall, VPN and router

Cisco puts Elasticsearch cluster, Docker/Kubernetes, Webex customers on guard, as well

The vulnerability, which has an impact rating of 9.8 out of 10 on the Common Vulnerability Scoring System, lets a potential attacker send malicious HTTP requests to a targeted device. A successful exploit could let the attacker execute arbitrary code on the underlying operating system of the affected device as a high-privilege user, Cisco stated.

The vulnerability is in the web-based management interface of three products: Cisco’s RV110W Wireless-N VPN Firewall, RV130W Wireless-N Multifunction VPN Router and RV215W Wireless-N VPN Router. All three products are positioned as remote-access communications and security devices.

The web-based management interface of these devices is available through a local LAN connection or through the remote-management feature. By default, remote management is disabled on these devices, Cisco said in its security advisory.

It said administrators can determine whether the remote-management feature is enabled for a device by opening the web-based management interface and choosing “Basic Settings > Remote Management.” If the “Enable” box is checked, remote management is enabled for the device.

The vulnerability is due to improper validation of user-supplied data in the web-based management interface, Cisco said.

Cisco has released software updates that address this vulnerability, and customers should check their software license agreement for more details.

Cisco warned of other developing security problems this week.

Elasticsearch

Cisco’s Talos security researchers warned that users need to keep a close eye on unsecured Elasticsearch clusters. Elasticsearch is an open-source distributed search and analytics engine built on Apache Lucene.

“We have recently observed a spike in attacks from multiple threat actors targeting these clusters,” Talos stated. In a post, Talos wrote that attackers are targeting clusters using versions 1.4.2 and lower, and are leveraging old vulnerabilities to pass scripts to search queries and drop the attacker’s payloads. These scripts are being leveraged to drop both malware and cryptocurrency-miners on victim machines.

Talos also wrote that it has identified social-media accounts associated with one of these threat actors. “Because Elasticsearch is typically used to manage very large datasets, the repercussions of a successful attack on a cluster could be devastating due to the amount of data present. This post details the attack methods used by each threat actor, as well as the associated payloads,” Cisco wrote.

Docker and Kubernetes

Cisco continues to watch a run-time security issue with Docker and Kubernetes containers. “The vulnerability exists because the affected software improperly handles file descriptors related to /proc/self/exe. An attacker could exploit the vulnerability either by persuading a user to create a new container using an attacker-controlled image or by using the docker exec command to attach into an existing container that the attacker already has write access to,” Cisco wrote.

“A successful exploit could allow the attacker to overwrite the host's runc binary file with a malicious file, escape the container, and execute arbitrary commands with root privileges on the host system,” Cisco stated.  So far Cisco has identified only three of its products as susceptible to the vulnerability: Cisco Container Platform, Cloudlock and Defense Orchestrator.  It is evaluating other products, such as the widely used IOS XE Software package.
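This behavior matches the widely reported runc container-escape flaw, CVE-2019-5736. As a hedged helper, the sketch below flags Docker engines older than 18.09.2, the release commonly cited as shipping the fixed runc; treat that threshold as an assumption to verify against your vendor's advisory rather than as a definitive test.

    # Flag engines that predate the commonly cited fixed release (assumed
    # 18.09.2 for CVE-2019-5736; verify against your vendor's advisory).
    import subprocess

    PATCHED = (18, 9, 2)  # assumed first patched upstream Docker release

    def docker_engine_version() -> tuple:
        out = subprocess.run(
            ["docker", "version", "--format", "{{.Server.Version}}"],
            capture_output=True, text=True, check=True).stdout.strip()
        core = out.split("-")[0]  # e.g. "18.09.1" or "18.09.1-ce"
        return tuple(int(part) for part in core.split("."))

    if docker_engine_version() < PATCHED:
        print("Engine predates the reported fix; check your runc status.")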

Webex

Cisco issued a third patch-of-a-patch for its Webex system. Specifically, Cisco said in an advisory that a vulnerability in the update service of Cisco Webex Meetings Desktop App and Cisco Webex Productivity Tools for Windows could allow an authenticated, local attacker to execute arbitrary commands as a privileged user. The company issued patches to address the problem in October and November, but the issue persisted.

“The vulnerability is due to insufficient validation of user-supplied parameters. An attacker could exploit this vulnerability by invoking the update service command with a crafted argument. An exploit could allow the attacker to run arbitrary commands with SYSTEM user privileges,” Cisco stated.

The vulnerability affects all Cisco Webex Meetings Desktop App releases prior to 33.6.6, and Cisco Webex Productivity Tools Releases 32.6.0 and later prior to 33.0.7, when running on a Microsoft Windows end-user system.
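Those ranges translate directly into code. A small sketch of the affected-version logic exactly as stated above (version parsing simplified to dotted integers):

    # Meetings Desktop App: affected if earlier than 33.6.6.
    # Productivity Tools: affected if 32.6.0 or later but earlier than 33.0.7.
    def parse(version: str) -> tuple:
        return tuple(int(p) for p in version.split("."))

    def desktop_app_affected(v: str) -> bool:
        return parse(v) < parse("33.6.6")

    def productivity_tools_affected(v: str) -> bool:
        return parse("32.6.0") <= parse(v) < parse("33.0.7")

    assert desktop_app_affected("33.6.5") and not desktop_app_affected("33.6.6")
    assert productivity_tools_affected("33.0.6") and not productivity_tools_affected("33.0.7")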

Details on how to address this patch are here.

Read more...

Dell EMC speeds up backups and restores in its storage appliances

With its new software, Dell EMC Data Domain on-premises restores are up to 2.5 times faster than prior versions.

Dell EMC has introduced new software for its Data Domain and Integrated Data Protection Appliance (IDPA) products that it claims will improve backup and restore performance by anywhere from 2.5 to four times over the previous versions.

Data Domain is Dell EMC’s purpose-built data-deduplicating backup appliance, originally acquired by EMC long before the merger of the two companies. The IDPA is a converged solution that offers complete backup, replication, recovery and deduplication, with cloud extensibility.
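For readers new to deduplication, here is a toy Python illustration of the core idea (a concept sketch only; real appliances use variable-length chunking and far more machinery): identical chunks are stored once, and each backup is just a list of references.

    # Content-addressed chunk store: only new content consumes space.
    import hashlib

    CHUNK = 4096
    store = {}  # content hash -> unique chunk bytes

    def backup(data: bytes) -> list:
        refs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # duplicate chunks cost nothing
            refs.append(digest)
        return refs

    def restore(refs: list) -> bytes:
        return b"".join(store[d] for d in refs)

    refs = backup(b"A" * 8192 + b"B" * 4096)  # two identical "A" chunks
    assert restore(refs) == b"A" * 8192 + b"B" * 4096
    print(len(refs), "chunk refs,", len(store), "unique chunks")  # 3 refs, 2 chunks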

Performance is the key feature Dell is touting with Data Domain OS 6.2 and IDPA 2.3 software. Dell says Data Domain on-premises restores are up to 2.5 times faster than prior versions, while data restoration from the Amazon Web Services (AWS) public cloud to an on-premises Data Domain appliance can be up to four times faster.

In addition, Dell is making Native Cloud Disaster Recovery available across the entire IDPA family, enabling failover to a cloud environment with end-to-end orchestration and sparing customers from having to set up and maintain a secondary site for disaster recovery.

Dell also now offers expanded Cloud Tier support for backups and restoration. It has added Google Cloud Platform and Alibaba Cloud, on top of support for AWS, Microsoft Azure, Dell EMC Elastic Cloud Storage, IBM Cloud Open Storage, and other cloud backup and storage services.

There is also a new Free-space Estimator Tool for Cloud Tier to assess how much capacity is needed with a cloud services provider, which should help customers rein in cloud storage costs.
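Dell has not published the estimator's model, so the sketch below is only a back-of-envelope stand-in with invented inputs, showing the kind of arithmetic such a tool performs: divide the logical backup size by the expected deduplication ratio, then grow the result over the planning horizon.

    # Back-of-envelope cloud-tier sizing; not Dell EMC's tool or its model.
    def cloud_tier_capacity_gb(logical_backup_gb: float, dedupe_ratio: float,
                               annual_growth: float, years: int) -> float:
        """Physical capacity after dedupe, compounded over the planning period."""
        physical = logical_backup_gb / dedupe_ratio
        return physical * (1 + annual_growth) ** years

    # 200 TB of logical backups, 10:1 dedupe, 20% annual growth, 3-year horizon
    print(round(cloud_tier_capacity_gb(200_000, 10.0, 0.20, 3)))  # 34560 GB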

For mid-sized companies, Dell has increased the storage capacity of its entry-level DD3300 appliance to up to 32TB in a 2U form factor, and has added support for 10Gb Ethernet networking and Fibre Channel links to a Virtual Tape Library.

Finally, Data Domain Virtual Edition (running on x86 servers or in the public cloud) has added support for AWS GovCloud, Azure Government Cloud, and Google Cloud Platform.

Read more...