NetApp Unveils FAS2500 Series

NetApp today launched the FAS2500 Series, its latest entry-class hybrid platform, with three new models: FAS2520, FAS2552, and FAS2554. These new systems replace the FAS2220, FAS2240-2, and the FAS2240-4, respectively. The FAS2500 Series will initially ship with Data ONTAP 8.2.2 RC1 (supporting either 7-Mode or clustered Data ONTAP).

The chassis, which has no single point of failure in high-availability (HA) configurations, is based upon the Storage Bridge Bay (SBB) industry standard and leverages much of the previous FAS2200 design. It includes 1+1 redundant AC power supplies with integrated cooling. It also meets RoHS/WEEE/REACH requirements.

As with the recent FAS8000 Series, NetApp has also redesigned the FAS2500 Series bezel for a consistent look-and-feel across the portfolio.

Let’s further explore each of these systems.

FAS2520

Each FAS2520 Processor Control Module (PCM) is powered by a single 1.73 GHz Dual-Core Hyper-Threading Intel “Jasper Forest” LC3528 CPU, an 800 MHz memory controller, and an Ibex Peak South Bridge. Each PCM also includes 18GB of memory (16GB DDR3 physical memory + 2GB NVMEM).

Connectivity for the FAS2520 includes the following ports for each PCM:

  • 2 x Intel 82580EB Gigabit Ethernet ports
  • 4 x Intel X540 10GBASE-T ports (FAS2520 only)
  • 2 x Marvell 10/100/1000 Ethernet management ports
  • 2 x PMC-Sierra 6Gbps SAS QSFP ports
  • 1 x RJ45 console port
  • 1 x disabled USB port

In order for the two PCMs to communicate with each other, the FAS2520 includes the Mellanox MT25204 -- an internal InfiniBand component found within each PCM. This HA interconnect can also be found within the previous FAS2200 and FAS2000 storage systems.

FAS2552 AND FAS2554
Like the FAS2520, each FAS2552 and FAS2554 Processor Control Module (PCM) is powered by a single 1.73 GHz Dual-Core Hyper-Threading Intel “Jasper Forest” LC3528 CPU, an 800 MHz memory controller, and an Ibex Peak South Bridge. Each FAS255x PCM also includes 18GB of memory (16GB DDR3 physical memory + 2GB NVMEM).

Connectivity for the FAS2552 and FAS2554 includes the following ports for each PCM:

  • 2 x Intel 82580EB Gigabit Ethernet ports
  • 4 x QLogic EP8324 10GbE or 16Gb FC ports
  • 2 x Marvell 10/100/1000 Ethernet management ports
  • 2 x PMC-Sierra 6Gbps SAS QSFP ports
  • 1 x RJ45 console port
  • 1 x disabled USB port

As with the FAS2520, the FAS2552 and FAS2554 include a Mellanox MT25204 InfiniBand device -- internal to each PCM -- that allows the two PCMs to communicate with each other.



One of the primary differences between the FAS2520 and the FAS255x models is that the FAS2520 PCMs do not support the Unified Target Adapter 2 (UTA2). This flexible I/O adapter is available only with the FAS2552 and FAS2554.

BIOS
For all FAS2500 Series models, the system firmware is based upon the FAS2240 and FAS2040 BIOS code (originally designed by Phoenix Technologies). The boot loader continues to be based upon the original Broadcom code with customized portions by NetApp.

SHELF CONVERSIONS
The FAS2552 and FAS2554 can be converted to a storage shelf; however, existing SAS shelves cannot be converted to a FAS2500 system. It should also be noted that NetApp has no plans to support FAS2520 shelf conversions.

SUMMARY
The FAS2500 Series is available to quote and order immediately with Data ONTAP 8.2.2 RC1.

NetApp Debuts FAS8080 EX

NetApp today launched the new FAS8080 EX, its most powerful storage system, which replaces the existing FAS/V6250 and FAS/V6290. This new system will initially ship with Data ONTAP 8.2.2 RC1 (supporting 7-Mode or clustered Data ONTAP).

The 6U form factor of each FAS8080 EX chassis (12U per HA pair) is targeted towards large enterprise customers with business-critical applications and multi-petabyte cloud providers.

Let’s explore some of the technical details of this new system.

Unlike the FAS8020/40/60, FAS8080 EX configurations require the I/O Expansion Module (IOXM). Single-chassis configurations include a controller with IOXM. Dual-chassis configurations include two chassis, each with a controller and IOXM.

The new FAS8080 EX has been qualified with the DS2246, DS4246, DS4486, DS4243, DS14mk4, and the DS14mk2-AT disk shelves with IOM6, IOM3, ESH4, and AT-FCX shelf modules. Virtualized storage from multiple vendors can also be added to the FAS8080 EX -- without a dedicated V-Series “gateway” system -- with the new “FlexArray” software feature.

NetApp will not offer a separate FlexCache model for the FAS8080 EX.

Inside each FAS8080 EX Processor Control Module (PCM) are two 2.8 GHz Intel "Ivy Bridge" processors in a dual-socket design (20 cores per controller), an Intel Patsburg-J southbridge, and 128GB of DDR3 physical memory (256GB per HA pair). NetApp states that Data ONTAP will take advantage of all 20 cores on each PCM at time of launch.

Unique to the FAS8080 EX is the amount of NVRAM: Each FAS8080 EX PCM includes 16 GB of NVRAM9 (32GB per HA pair) with battery backup. Should a power loss occur, the NVRAM contents are destaged onto NAND Flash memory; once power is restored, the resulting NVLOG is then replayed to restore the system. NVRAM9 is integrated on the motherboard and does not take up a slot.
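
To make the journal-and-replay behavior concrete, here is a minimal, conceptual Python sketch of an NVRAM-style write log: writes are acknowledged once journaled, the journal is destaged to flash on power loss, and the log is replayed on restart. This is only an illustration of the general technique under simplified assumptions, not how Data ONTAP actually implements NVRAM or NVLOG; the class and file names are hypothetical.

```python
import json
import os

class NVLog:
    """Conceptual write journal: log first, apply to the backing store later.
    Purely illustrative -- not Data ONTAP's NVRAM/NVLOG implementation."""

    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.entries = []          # stands in for battery-backed NVRAM contents

    def log_write(self, block_no, data):
        # Acknowledge the client as soon as the write is journaled.
        self.entries.append({"block": block_no, "data": data})

    def destage_to_flash(self):
        # On power loss, NVRAM contents are persisted to NAND flash.
        with open(self.journal_path, "w") as f:
            json.dump(self.entries, f)

    def replay(self, backing_store):
        # After power returns, replay the journal to restore consistency.
        if not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as f:
            for entry in json.load(f):
                backing_store[entry["block"]] = entry["data"]
        os.remove(self.journal_path)

# Example: journal two writes, "lose power", then replay on restart.
store = {}
nvlog = NVLog("nvlog.json")
nvlog.log_write(0, "AAAA")
nvlog.log_write(1, "BBBB")
nvlog.destage_to_flash()           # power loss: journal lands on flash
NVLog("nvlog.json").replay(store)  # restart: NVLOG replay restores the writes
print(store)                       # {0: 'AAAA', 1: 'BBBB'}
```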

The FAS8080 EX is built upon the Gen 3 PCI Express (PCIe) architecture for embedded devices (such as PCI bridges, Ethernet / Fibre Channel / InfiniBand adapters, and SAS controllers). HA Pair configurations include a total of 24 PCIe slots with x8 wide links.
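
For a rough sense of what each Gen 3 x8 slot provides, here is a back-of-the-envelope calculation using generic PCIe arithmetic (8 GT/s per lane with 128b/130b encoding). These figures are standard PCIe math, not a NetApp specification.

```python
# Generic PCIe Gen 3 arithmetic (not a NetApp specification).
transfers_per_lane = 8e9      # 8 GT/s per lane
encoding_efficiency = 128 / 130
lanes = 8                     # x8 wide link
gbytes_per_sec = transfers_per_lane * encoding_efficiency * lanes / 8 / 1e9
print(f"~{gbytes_per_sec:.2f} GB/s per direction for one x8 slot")  # ~7.88 GB/s
```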

Interestingly, the HA interconnect for the FAS8080 EX now leverages 40Gb QDR InfiniBand adapters, doubling the bandwidth of the FAS/V6290’s 20Gb DDR InfiniBand interconnect.

NetApp recommends connecting at least four onboard ports on the FAS8080 EX for the cluster interconnect; this allows clustered Data ONTAP to reach peak performance for remote workloads. The FAS8080 EX even supports up to six interconnect ports with the addition of the X1117A-R6 adapter in slot 12.

However, using only two cluster interconnects is still supported.

As with previous FAS8000 systems, the FAS8080 EX includes support for the new Unified Target Adapter (UTA) 2 -- a storage industry first from NetApp. It supports 16Gb Fibre Channel (FC) or 10Gb Ethernet, providing future flexibility. Both ports on an adapter must be set to the same "personality"; changing one UTA2 port automatically changes the second port to match.

The FC personality on the UTA2 autoranges across 16/8/4Gb FC link speeds, but does not operate at 2 or 1 Gbps. The 10GbE personality will not autorange below 10Gb speeds. It is important to note that UTA2 ports are not supported with older DS14 FC shelves or FC tape devices. To connect to DS14 shelves or FC tape, use either the X1132A-R6 or X2054A-R6 adapter.

The FAS8080 EX can hold a maximum of 1,440 drives (per HA system), with a maximum capacity of 5,760TB. The maximum amount of Flash Cache and Flash Pool (combined) capacity is up to 36TB per HA pair. Maximum aggregate size is 400TB and the maximum volume size is 100TB.
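
The stated maximum capacity lines up with the drive count if you assume 4TB drives, as this quick sanity check shows (the drive size is an assumption, not a quoted specification):

```python
# Quick sanity check, assuming 4TB drives (an assumption, not a quoted spec).
max_drives_per_ha = 1440
tb_per_drive = 4
print(max_drives_per_ha * tb_per_drive)  # 5760 -> matches the 5,760TB maximum
```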

The FAS8080 EX is available to quote and order immediately with Data ONTAP 8.2.2 RC1.

NetApp Rebrands SSD-Only FAS Systems as All-Flash FAS

NetApp today rebranded its Fabric-Attached Storage (FAS) systems with only solid-state drives (SSD) as All-Flash FAS (or AFF) systems.

AFF systems can run any version of Data ONTAP that supports SSDs. However, NetApp plans to also offer five pre-configured bundles, starting June 23, that leverage the FAS8080 EX and FAS8060 with 200GB, 400GB, and 800GB SSDs.

But why is Data ONTAP good for flash?

I've asked Nick Triantos, one of our consulting systems engineers, to comment on why AFF is different. This is what he said:

"The biggest challenge for us is not how WAFL writes; in fact, that’s a real advantage. The biggest challenges for us have been:

Multi-core Optimizations -- For a long time, Data ONTAP didn’t leverage multiple cores effectively. In fact, the project for multi-core optimizations started back with version 7.3 and has continued through the Data ONTAP 8 releases. I’m sure you’ve seen where one CPU was at 90% and the other at 20%! If the workload was hitting an ONTAP domain that would run on a single core, then your performance bottleneck was that particular core (90%). It didn’t matter if the other cores were underutilized. This has been addressed.

Metadata Management -- When you leverage a small block size like 4K, you inherently create a ton of metadata you need to manage. In order to get to the data fast, you need even faster access to metadata. How do you access metadata faster? In memory. That’s why there’s a ton of memory in the FAS2500 and FAS8000 Series, so we can keep as much metadata as possible in DRAM.

Data Protection -- This is actually related to the above. The AFF has more data protection features than any flash (or non-flash) array on the market. While this is a good thing, there’s a trade-off: longer I/O paths, because metadata has to be located and validated against the actual data blocks.

How do you protect against lost writes for example? What happens if I’m a trading firm and the SSD acknowledges that an SSD page has been written – when it was either not written at all or it was written to the wrong location? You just lost millions of dollars. Data ONTAP not only detects, but also protects and recovers from lost writes (which are a very insidious type of failure).”
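
To put the metadata point above in perspective, here is a rough, illustrative calculation of how quickly 4K-block metadata adds up. The per-block metadata size below is an assumed placeholder chosen purely for illustration, not a Data ONTAP figure.

```python
# Illustrative only: the per-block metadata size is an assumed placeholder,
# not a Data ONTAP figure.
volume_tb = 100
block_size = 4 * 1024                           # 4K blocks
blocks = volume_tb * (1024 ** 4) // block_size  # ~26.8 billion blocks in 100TB
assumed_metadata_per_block = 64                 # hypothetical bytes of metadata per block
metadata_gb = blocks * assumed_metadata_per_block / (1024 ** 3)
print(f"{blocks:,} blocks -> roughly {metadata_gb:,.0f} GB of metadata to track")
```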

I said, “Let’s talk more about lost writes.” Here’s his response:

“Lost writes are a rare but very stealthy failure, and the worst part is you won’t know one happened until days or even months later. But once it happens, you just corrupted your data! Good luck trying to find out which backup or snapshot or replication point is not corrupted. Of course, all this additional data protection stuff comes with a trade-off.

On the other hand, claiming blazing speeds while protecting against only two drive losses is not sufficient to claim superior data protection -- especially when flash arrays are typically deployed for business-critical, revenue-generating applications. You have to have worked through all the failure modes and made sure you can protect against them. We’ve hardened Data ONTAP over nearly 20 years to provide a very high level of resiliency against all modes of failure in various combinations.”
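
To illustrate the class of failure Nick describes, here is a minimal, conceptual Python sketch of lost-write detection using a per-block checksum plus a block "context" (expected location and write generation) stored alongside the data. It is a generic illustration of the technique, not Data ONTAP's actual on-disk format.

```python
import hashlib

# Conceptual sketch only -- not Data ONTAP's on-disk format.
def make_block(data, block_no, generation):
    """Store data with a checksum and a 'context' (where and when it was written)."""
    checksum = hashlib.sha256(data).hexdigest()
    return {"data": data, "checksum": checksum,
            "block_no": block_no, "generation": generation}

def verify_block(block, expected_block_no, expected_generation):
    """Detect corruption, misplaced writes, and lost (stale) writes on read."""
    if hashlib.sha256(block["data"]).hexdigest() != block["checksum"]:
        return "corrupt block"
    if block["block_no"] != expected_block_no:
        return "misplaced write (written to the wrong location)"
    if block["generation"] < expected_generation:
        return "lost write (drive acknowledged but never persisted the update)"
    return "ok"

# A drive that ACKs a write but silently drops it leaves the old generation on disk.
disk = {7: make_block(b"old data", block_no=7, generation=1)}
# The filesystem believes generation 2 was written to block 7...
print(verify_block(disk[7], expected_block_no=7, expected_generation=2))
# -> "lost write (drive acknowledged but never persisted the update)"
```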

To recap, NetApp AFF system bundles have:

  • Larger memory
    • Larger read/write cache in the FAS8000 = more in-memory metadata
  • Faster NVRAM
    • Faster ACKs = lower response times
  • Significant multi-core optimizations
    • From Data ONTAP 7.3 through version 8.2+
  • Continuous Segment Size Cleaning (CSS)
    • Data ONTAP variable segment size (4K-256K)
  • Intelligent algorithms (illustrated in the sketch below)
    • Pattern-detection-based read-ahead
    • Sequential reads with the same block size (e.g., 32K) and with mixed block sizes (4K, 64K, 4K, 64K)
    • Strided reads: start at block N, then read blocks 10 and 12 while skipping block 11 in between
    • Backwards reads: start at block N and read 10 blocks backwards
    • Multiple threads simultaneously reading from multiple locations
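
As a rough illustration of the pattern-detection read-ahead mentioned in the list above, here is a simplified Python sketch that classifies a stream of block requests as sequential, strided, or backwards and prefetches accordingly. It is a generic illustration of the idea, not NetApp's actual algorithm.

```python
# Simplified illustration of pattern-detection read-ahead -- not NetApp's algorithm.
def classify_pattern(blocks):
    """Classify recent block requests as sequential, strided, backwards, or random."""
    deltas = [b - a for a, b in zip(blocks, blocks[1:])]
    if all(d == 1 for d in deltas):
        return "sequential"
    if all(d == -1 for d in deltas):
        return "backwards"
    if len(set(deltas)) == 1 and deltas[0] > 1:
        return f"strided (stride {deltas[0]})"
    return "random"

def predict_next(blocks, count=4):
    """Prefetch the next blocks only when a clear pattern is detected."""
    if classify_pattern(blocks) == "random":
        return []
    stride = blocks[-1] - blocks[-2]
    return [blocks[-1] + stride * i for i in range(1, count + 1)]

print(classify_pattern([10, 11, 12, 13]), predict_next([10, 11, 12, 13]))  # sequential
print(classify_pattern([10, 12, 14, 16]), predict_next([10, 12, 14, 16]))  # strided
print(classify_pattern([20, 19, 18, 17]), predict_next([20, 19, 18, 17]))  # backwards
```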

    AFF systems are available to quote and order today.

    Introducing NetApp Private Storage for Microsoft Azure

    NetApp, Microsoft, and Equinix today introduced “NetApp Private Storage for Microsoft Azure,” a hybrid cloud infrastructure that links NetApp Storage with Azure Compute via Azure ExpressRoute.


    DETAILS
    The solution consists of several components.

    First, FAS Storage Systems, running either 7-Mode or clustered Data ONTAP, must reside in a colocation facility that is an Azure ExpressRoute Exchange provider (such as Equinix). Even though both operating modes work, NetApp highly recommends clustered Data ONTAP.

    It is also important to note that NetApp is testing E-Series Storage Arrays with iSCSI as part of this solution.

    Next, the solution requires Azure ExpressRoute: a private connection that bypasses the public Internet. ExpressRoute connections offer faster speeds, greater reliability, and higher security than typical connections. In NetApp's own testing, ExpressRoute delivered 36% better performance than a VPN over the public Internet.


    SUPPORTED REGIONS 
    According to the three vendors, the solution is currently available in two Azure regions:

    Azure US West (San Jose, California):
    • 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
    • 1ms - 2ms latency observed

    Azure US East (Ashburn, Virginia):
    • 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
    • < 1ms - 1ms latency observed

    As ExpressRoute is rolled out globally, NetApp will be testing latency in additional locations.


    CUSTOMER NETWORK REQUIREMENTS
    There are also several required features for the customer's network equipment within the Equinix colocation facility. NetApp does not certify specific network equipment to be used in the solution; however, the network equipment must support the following features:

    Border Gateway Protocol (BGP)
    BGP is used to route network traffic between the local network in the Equinix colocation facility and the Azure virtual network. 
    Minimum of two 9/125 Single Mode Fiber (SMF) Ethernet ports
    Azure ExpressRoute requires two physical connections (9/125 SMF) from the customer network equipment to the Equinix Cloud Exchange. Redundant physical connections protect against potential loss of ExpressRoute service caused by a failure in the physical link. The bandwidth of these physical connections can be 1Gbps or 10Gbps. 
    1000Base-T Ethernet ports
    1000BASE-T network ports on the switch provide network connectivity from the NetApp storage cluster. Although these ports can be used for data, NetApp recommends using 1GbE ports for node management and out-of-band management. 
    Support for 802.1Q VLAN tags
    802.1Q VLAN tags are used by the Equinix Cloud Exchange and Azure ExpressRoute to segregate network traffic on the same physical network connection.
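
    For readers less familiar with tagging, the following small Python sketch builds the 4-byte 802.1Q tag used to segregate traffic on a shared link, and shows how the QinQ stacking described under the optional features below simply adds an outer S-tag. The VLAN IDs are hypothetical examples; the TPID values are the standard 802.1Q/802.1ad ones.

```python
import struct

# Hypothetical VLAN IDs for illustration; TPIDs are the standard 802.1Q/802.1ad values.
TPID_CTAG = 0x8100  # 802.1Q customer tag
TPID_STAG = 0x88A8  # 802.1ad service tag (used for QinQ stacking)

def vlan_tag(tpid, vlan_id, priority=0):
    """Build a 4-byte VLAN tag: 16-bit TPID + 16-bit TCI (PCP, DEI, VID)."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

single_tag = vlan_tag(TPID_CTAG, 100)                       # plain 802.1Q, VLAN 100
qinq = vlan_tag(TPID_STAG, 200) + vlan_tag(TPID_CTAG, 100)  # outer S-tag 200, inner C-tag 100
print(single_tag.hex())  # 81000064
print(qinq.hex())        # 88a800c881000064
```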

    Other optional features for the solution include:

    Open Shortest Path First (OSPF) protocol
    OSPF protocol is used when there are additional network connections back to on-premises data centers or other NetApp Private Storage for Microsoft Azure solution locations. OSPF is used to help prevent routing loops. 
    QinQ (stacked) VLAN tags
    QinQ VLAN tags (IEEE 802.1ad) can be used by the Equinix Cloud Exchange to support the routing of the network traffic from the network to Azure. The outer service tag (S-tag) is used to route traffic to Azure from the Cloud Exchange. The inner customer tag (C-tag) is passed on to Azure for routing to the Azure virtual network through ExpressRoute. 
    Virtual Routing and Forwarding (VRF)
    Virtual Routing and Forwarding is used to isolate routing of different Azure Virtual Networks and the customer VLANs in the Equinix co-location facility. Each VRF will have its own BGP configuration. 
    Redundant network switches
    Redundant network switches protect from a loss of ExpressRoute service caused by switch failure. It is not a requirement, but it is highly recommended that redundant switches are used. 
    10Gbps Ethernet ports
    Connecting 10Gbps Ethernet ports on the NetApp storage to the switch provides the highest amount of bandwidth capability between the switch and the storage to support data access.

    NetApp also indicates that connectivity of FAS Storage to Azure Compute only supports IP storage protocols (SMB, NFS, and iSCSI) at this time.


    SCENARIOS
    There are several scenarios envisioned for this solution:

    • Cloudburst for peak workloads
    • Disaster Recovery
    • Dev/Test and Production Workloads
    • Multi-Cloud Application Continuity
    • Data Center Migration/Consolidation

    One of the more interesting scenarios is multi-cloud application continuity. For example, take two geographically-dispersed Microsoft SQL Server 2012 Availability Group (AG) nodes in an Active/Passive configuration.

    The primary SQL AG node is a Hyper-V virtual machine located in a Microsoft Private Cloud on the East Coast of the United States. The SQL AG node located in the Microsoft private cloud is connected to NetApp storage via iSCSI.

    The secondary SQL AG node is an Azure virtual machine located in a Virtual Network in the West US Region. The secondary SQL AG node is connected to NetApp Private Storage in the co-location facility via iSCSI over a secure, low latency, high bandwidth Azure ExpressRoute network connection. Additionally, a third SQL AG node could be located in an Amazon Web Services (AWS) compute node; providing further multi-cloud failover capability.


    SQL AG Replication occurs via a network connection between the on-premises private cloud and the Azure virtual network.

    If a SQL node, the SQL storage in the primary location, or the entire primary datacenter is lost, the surviving SQL Availability Group node's database replicas are activated automatically.

    This application continuity model can be extended by using multiple Azure regions with NPS for Azure deployments -- each in different Azure regions.


    SUMMARY
    NetApp Private Storage for Microsoft Azure is immediately available through reseller partners and directly from NetApp, Microsoft, and Equinix in North America. The solution will be available in Europe and Asia in the near future.

    NetApp Adds Migration Feature to Automation Software

    NetApp this week quietly added a new LUN migration tool to OnCommand Workflow Automation (WFA), its software for provisioning, migrating, and decommissioning storage.

    Intended to aid in the transition of LUNs from 7-Mode to clustered Data ONTAP (cDOT), this tool consists of a set of WFA workflows and associated applications that convert the 7-Mode LUNs to files, then mirror those files to a cDOT cluster, and finally convert the files back into LUNs on the cDOT destination.

    NetApp is expected to position this tool as a replacement for the DTA2800 appliance or host-based migrations, as it enables both online and offline LUN migrations:

    ONLINE: 
    Online migrations perform a LUN-to-file conversion via NFS hardlinks, which requires a temporary “staging volume” to hold a copy of the source LUN. The application and host are shut down before cutover. Cutover downtime is minimal, usually the time it takes to complete the host remediation activities -- regardless of LUN size.

    OFFLINE:
    Offline migrations leverage a FlexClone volume of the source to convert the LUNs into files and SnapMirror between this cloned volume and the clustered Data ONTAP target. This technique does not require a temporary staging volume, but the host (where the LUN is mounted) must be shut down.

    Interestingly, the conversion is actually performed via Windows PowerShell cmdlets. The tool allows per-volume or per-LUN migration, leaving the source data intact. Although Snapshot history is not copied, the tool preserves storage efficiencies during offline migrations.

    The LUN Migration Tool requires several components, including:

    • OnCommand Workflow Automation (WFA) 2.2 RC1 or later
    • OnCommand Unified Manager Core Package 5.2 for 7-Mode
    • OnCommand Unified Manager 6.1 RC1 for clustered Data ONTAP

    The source storage system must be running Data ONTAP 7.3.x or 8.x 7-Mode. The staging storage system must be qualified to be the source of the clustered Data ONTAP Transition Data Protection (TDP) SnapMirror, and the destination storage system must be running clustered Data ONTAP 8.2.x.

    For more details, see NetApp Technical Report 4314 entitled Workflow Automation for SAN LUN Migration: 7-Mode LUN Transition to Clustered Data ONTAP 8.2.

    And Now for Something Completely Different

    For nearly a decade, I have had the opportunity to be an advocate for the most extraordinary products, people, and partnerships that make up the NetApp ecosystem. My advocacy efforts -- shared via this blog, Twitter, conferences, analyst days, and even NASDAQ -- have evolved far beyond the typical "bits and bytes" discussion.

    So today, it is with both excitement and sadness that I announce my transition from Avnet to a new role as Senior Product Marketing Manager at CommVault.

    I am astounded by the changes to the storage and data management industry in just the last decade: scale-out NAS, unified storage, flash, converged, and hybrid cloud have evolved from buzzwords to real-world implementations. In fact, I am convinced that we are in the middle of a transitional period that will result in a completely software-defined datacenter by the next decade.

    Yet one thing remains the same: data protection.

    It is no secret that unrelenting data growth is wreaking havoc upon the conventional approach to backup and recovery. I often see organizations challenged with faster recovery requirements, end-user computing, and the increased adoption of cloud computing.

    But where others see challenges, I see opportunity. And that's what progress is all about.

    It is this vision that leads me to CommVault. My primary focus will be to create and promote Simpana solutions for protecting, recovering, and optimizing enterprise applications from Oracle, SAP, VMware, Microsoft, and IBM. Of course, CommVault is also a strategic alliance partner of NetApp, so I am equally excited about the upcoming releases of NetApp SnapProtect.

    Again, it has been an incredible journey as the #1 NetApp Advocate. Here’s to an exciting future ahead for each and every one of us!

    Why We Love to DASH (And You Should, Too!)


    Nobody likes slow backups.

    Take, for example, “Synthetic Full” backups. This type of backup allows for the creation of a new “full” backup image by combining a previous full backup with the associated incremental backups – essentially implementing an “Incremental Forever” strategy.

    Generally speaking, a synthetic full means less time is needed to perform a backup and system restore times are reduced.

    However, this is not always the case.

    When you introduce deduplicated data into a standard synthetic full backup, the process requires reading, rehydrating, and re-deduplicating the data -- all of which is time-consuming and resource-intensive. So how do you reduce your backup time from several hours to just a few minutes?

    Meet DASH (or, "Deduplication Accelerate Streaming Hash").

    At its lowest level, Simpana software is just combining the previous backups (since all of the blocks of a DASH Full backup already live on the disk media). Simpana simply creates a backup image that has pointers to existing disk blocks, updates the reference counts of these blocks in the Deduplication Database (DDB), and updates the objects in the Index Cache.
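
    A minimal Python sketch of the idea: a DASH-style synthetic full simply creates a new image that points at blocks already in the deduplication store and bumps their reference counts, rather than reading and rewriting data. This is a conceptual illustration of reference-counted deduplication, not Simpana's actual data structures.

```python
# Conceptual illustration of a reference-counted dedup store -- not Simpana's internals.
class DedupStore:
    def __init__(self):
        self.refcounts = {}   # block hash -> number of backup images referencing it

    def add_image(self, name, offset_to_hash):
        """A backup image is just pointers (offset -> block hash) into the store."""
        for h in offset_to_hash.values():
            self.refcounts[h] = self.refcounts.get(h, 0) + 1
        return {"image": name, "blocks": dict(offset_to_hash)}

def dash_full(store, full_image, incrementals):
    """Synthesize a new full without reading or rewriting data:
    start from the last full, overlay each incremental, update pointers/refcounts."""
    latest = dict(full_image["blocks"])
    for inc in incrementals:
        latest.update(inc["blocks"])          # changed offsets point at newer blocks
    return store.add_image("synthetic-full", latest)

store = DedupStore()
full = store.add_image("full-sunday", {0: "a1", 1: "b2", 2: "c3"})
inc1 = store.add_image("inc-monday",  {1: "b9"})   # offset 1 changed on Monday
new_full = dash_full(store, full, [inc1])
print(new_full["blocks"])    # {0: 'a1', 1: 'b9', 2: 'c3'}
print(store.refcounts)       # {'a1': 2, 'b2': 1, 'c3': 2, 'b9': 2}
```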

    This is fast. Very fast.

    In fact, we see 90% less disk I/O and significantly fewer system resources being consumed on the Media Agent with DASH Full backups.

    From a restore standpoint, there's absolutely no difference between a DASH Full and a regular full backup. In other words, restore operations are unaffected.

    Bottom-line: we love fast backups. And we love dedupe. Together, we believe you'll love backups that are fast AND efficient. For more information, visit CommVault.com.

    A New Way to Think About Oracle Backups

    It’s no secret that protecting Oracle Database systems can be complex. Companies often dedicate a team of Oracle database administrators (DBAs) to create, test, and schedule Recovery Manager (RMAN) scripts for backup, recovery, archiving, and business continuance.

    But isn't there a better way?

    Yes, there is. And it doesn't matter if it's a single instance, Real Application Cluster (RAC), or Exadata system. On-premises or in the cloud. Or both.

    If you’re just beginning to build out a new Oracle Database environment, imagine:

    • Meeting (or exceeding) business service level agreements through tiered backup, restore, and archive with deep RMAN, RAC, and Exadata integration
    • Removing the need for DBAs to manage RMAN scripts

    If expanding or upgrading Oracle Database, you can: 

    • Reduce complexity and speed protection by eliminating duplicate processes with integrated Oracle Log management
    • Avoid purchasing a separate archiving product

    Finally, if your mandate is to “move to the cloud”, you can:

    • Reduce storage overhead by tiering infrequently accessed data to AWS S3, Glacier, or Microsoft Azure
    • Protect data starting from the source with in-stream encryption and then extend encryption to data “at-rest” in the cloud

    Let's chat if you’re at Oracle OpenWorld 2014 this week! Stop by Booth #2425 (near the “Nothin’ But Net” basketball experience).

    What The Walking Dead Teaches Us about Backup

    If you’re a fan of the television series, The Walking Dead, you’re probably anticipating Season 5 starting October 12.

    Rumor has it that things won’t be any better for our heroes this season. Could there be something after the zombie apocalypse? No one knows, but what we do know is that individual groups escaped from the prison after its downfall, attempting to survive as they followed a line of railroad tracks to a supposed safe zone named Terminus.

    But could this plot also apply to enterprise backup?

    Season 5 of The Walking Dead opens with the Termites holding Rick, Daryl, Michonne, and most of the others in a railroad car. But, much like our heroes in that railroad car, is it possible that your data protection strategy is being held hostage?

    For example: how do you protect large, high I/O, or non-Windows VMs with agentless backup?

    While application-aware agentless backup seems ideal, it’s far from reality. If you don’t have permanent agents, you may have to deal with account management issues. This means setting Local Admin privileges for each and every VM. Wouldn't this become unmanageable as you scale your environment?

    Further, if you run more than just Windows Servers, how do you back up Oracle, SAP, and DB2 in Linux or UNIX environments with the Windows-only Volume Shadow Copy Service?

    And what about capacity? How do you keep costs low if your backup software is highly dependent upon third-party hardware products for deduplication or separate software components to forecast storage consumption? Often, these costs are overlooked during a proof of concept.

    Bottom-line: The Walking Dead teaches us to survive in the most extreme conditions. While you probably won't need to recover your data due to a zombie apocalypse, your backup strategy needs to be smarter than a mindless zombie. In the words of The Governor from The Walking Dead, "You can’t think forever. Sooner or later, you gotta make a move."

    What’s your move?

    Dystopia for the Disrupted?

    The debate surrounding our digital lives being ‘disrupted’ is nearly as constant as the never-ending stream of status updates on social media. To digital natives, it’s a new age of transparency, always-on connectivity, and the democratization of knowledge. To the late adopters, it’s a bacchanal of slacktivism, smartphone addiction, and narcissistic culture.

    So is today’s technophobia a cultural dystopia for the disrupted?

    In many ways, our early 21st-century data-driven culture is just the beginning. We now measure metrics for experiences previously thought to be ‘unmeasurable’. We quantify aesthetic qualities of art, music, and design via computer algorithms. Corporate-earnings reports are now written in less than a second through software. This new wave of technology will be the catalyst to a post-industrialist future -- engineered increasingly by algorithms and intelligent agents rather than humans.

    For example, take the proliferation of ‘bots’ on Twitter. It has been estimated that approximately 24% of all tweets originate from bots. Some tweet every word of the English language, one word every 30 minutes, while others report earthquakes in the San Francisco Bay Area, and yet others compose their own poetic tweets.

    Another project, by the Machine Learning group at the University of Toronto, highlights deep learning by taking an image and converting it into a sentence describing the image. While not perfect, it’s a fascinating view into the possibilities of algorithmic image recognition.

    During the most recent earnings season for publicly-traded companies, corporate-earnings reports hit the newswires. Instantly, the data was compiled and passed through a proprietary algorithm. The software captured specific numbers in each report and matched them against its database of relevant information. In milliseconds, the software produced a fully written article, entirely indistinguishable from one written by a human.

    These are just a few of the early adopters in this new era that seek to amalgamate engineering, art, and news. And this is just the beginning!

    Among the growing hype of these new technologies, there is also growing skepticism of their anti-social effects.

    Fabian Giesen (a former video game developer) has expressed concern that virtual reality technology (such as Oculus Rift) is on a “sad trajectory of entertainment moving further and further away from shared social experiences”.

    Nintendo’s Shigeru Miyamoto also shares the view that virtual reality gaming will lead us down a path that further promotes isolation. In an interview with the Guardian, he states that virtual reality is “in direct contrast” with the design goals of the Wii U. He goes on to state, “I have a little bit of uneasiness with whether or not that’s the best way for people to play.”

    Nikolas Kompridis, a Canadian philosopher and political theorist, has also written about the dangers of genetic engineering, nanotechnology, and robotics. He warns that these technologies introduce unprecedented new challenges to humans, including the possibility of the permanent alteration of our biological nature.

    Despite those who are instinctively suspicious of new paradigms, technology continues to permeate and fundamentally alter nearly every aspect of most people's lives. It has the capacity for tremendous benefit, yet also great harm. The challenge we face is not the dichotomy of humans vs. machines; rather, it is understanding how we coexist with technology to raise the human potential.