Aberdeen Standard Global Infrastructure Income Fund Announces Its Initial Monthly Distribution and Stable Distribution Plan

prnewswire | August 31, 2020

Shares of Aberdeen Standard Global Infrastructure Income Fund (the "Fund") (NYSE: ASGI), a closed-end management investment company, commenced trading on July 29, 2020.  Today, the Fund is announcing its initial monthly distribution, payable on October 9, 2020, and the terms of its Stable Distribution Plan. Under its Stable Distribution Plan, the Fund will pay a fixed monthly distribution at an annualized rate of 6.5% on the initial public offering price of $20.00 for the 12 months ending September 30, 2021. In connection with the above plan, the first monthly distribution in the amount of US $0.1083 per share will be paid on October 9, 2020 to shareholders of record at the close of business on October 2, 2020.
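As a check on the stated figure, the monthly amount implied by the plan's terms can be computed directly (a minimal sketch; the four-decimal rounding convention is an assumption based on the precision quoted above):

```python
# Stable Distribution Plan terms from the announcement
ipo_price = 20.00      # initial public offering price, USD per share
annual_rate = 0.065    # 6.5% annualized distribution rate

# Fixed monthly distribution per share, rounded to the four decimal
# places quoted in the announcement
monthly_distribution = round(ipo_price * annual_rate / 12, 4)
print(monthly_distribution)  # 0.1083
```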

Spotlight

Fordway is one of the UK's leading IT infrastructure specialists.  We have more than 20 years’ experience providing business infrastructure change and support to UK organisations in the public, private and not-for-profit sectors. Our clients include Comic Relief, the London Borough of Hillingdon and Intelligent Energy.

Related News

APPLICATION INFRASTRUCTURE

Mavenir announces the creation of an OpenRAN Acceleration Business Unit

businesswire | December 16, 2020

Mavenir, the industry's only end-to-end network software provider, has announced an expanded radio ecosystem as its industry leadership and momentum in OpenRAN continue. Mavenir has been an early advocate of OpenRAN as a way to drive openness, innovation, and security in 5G, and this expansion will invigorate the Remote Radio Unit ecosystem. The OpenRAN business depends on the availability of carrier-grade, ORAN-compliant radios that can meet the evolving needs and requirements of public and private mobile networks. To this end, Mavenir is announcing a new business unit to create production-quality hardware and software designs for OpenRAN radios, enabling vendors and partners to build radios for specific markets. Mikael Rylander will lead this group as SVP and GM, Radio Business Unit, focusing on RRUs and the RRU ecosystem. “Our OpenRAN deployments and trials are gaining momentum globally. To further strengthen our roadmap and ongoing Radio Access strategic initiatives, I am pleased to announce the formation of an RRU Business Unit,” said Pardeep Kohli, President and CEO of Mavenir. “With our leadership in OpenRAN, we want to enable best-in-class partner ecosystem that can together deliver a portfolio of compelling OpenRAN radios. We will support this ecosystem with RRU software, expertise, facilitation of appropriate production sites and RRU validation with our vRAN products. These radios will be available for use by any OpenRAN vendor and not exclusively to Mavenir.” In addition, Puneet Sethi will join Mavenir as SVP/GM to lead the RAN Business Unit, focusing on creating and delivering market-leading CU and DU software solutions. Mavenir's CU and DU solutions are designed on OpenRAN principles and are already deployed with numerous OpenRAN-compliant third-party radios.
Staying true to those OpenRAN principles, Mavenir's CU and DU software solutions will work with both Mavenir radio units and third-party radio units, offering customers the broadest choice in the industry. Puneet joins Mavenir most recently from Qualcomm, where he built the Small Cells and RAN Infrastructure business from the ground up to its current industry-leading position in 5G. He had overall responsibility for the business unit's P&L, execution, product roadmap and business development. While delivering significant business growth, he successfully evolved FSM RAN products, including baseband SoC, software and RF, through 3G/4G/5G industry transitions to meet the needs of Qualcomm's global customer base. Before that role, he held various positions at Qualcomm, including UWB business development and leading the PHY software team responsible for Qualcomm's first LTE UE modem launch. He also brings a diverse set of telecom experiences from earlier roles at Radioframe Networks, Comneon and Ubinetics. About Mavenir: Mavenir is the industry's only end-to-end, cloud-native network software provider. Focused on accelerating software network transformation and redefining network economics for Communications Service Providers (CSPs) by offering a comprehensive end-to-end product portfolio across every layer of the network infrastructure stack. From 5G application/service layers to packet core and RAN – Mavenir leads the way in evolved, cloud-native networking solutions enabling innovative and secure experiences for end users. Leveraging industry-leading firsts in VoLTE, VoWiFi, Advanced Messaging (RCS), Multi-ID, vEPC, and Virtualized RAN, Mavenir accelerates network transformation for more than 250 CSP customers in over 120 countries, serving over 50% of the world’s subscribers.

Read More

Nokia and Google Cloud are collaborating strategically to transform Nokia's digital infrastructure

prnewswire | October 14, 2020

Nokia and Google Cloud today announced a five-year strategic collaboration that will see Nokia migrate its on-premise IT infrastructure onto Google Cloud. Nokia will migrate its data centers and servers around the world, as well as various software applications, onto Google Cloud infrastructure. The deal reflects Nokia's important operational shift to a cloud-first IT strategy and its aggressive efforts to strengthen and transform its digital operations globally in order to expand collaboration and innovation capabilities of Nokia employees and to enhance its delivery to customers. The agreement is expected to drive meaningful operational efficiencies and cost savings over time due to a reduction in real estate footprint, hardware energy consumption, and hardware capacity purchasing needs.

Read More

APPLICATION INFRASTRUCTURE

CEVA Redefines High Performance AI/ML Processing for Edge AI and Edge Compute Devices

CEVA | January 06, 2022

Consumer Electronics Show – CEVA, Inc., the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions, today announced NeuPro-M, its latest generation processor architecture for artificial intelligence and machine learning (AI/ML) inference workloads. Targeting the broad markets of Edge AI and Edge Compute, NeuPro-M is a self-contained heterogeneous architecture composed of multiple specialized co-processors and configurable hardware accelerators that seamlessly and simultaneously process diverse Deep Neural Network workloads, boosting performance by 5-15X compared to its predecessor. An industry first, NeuPro-M supports both system-on-chip (SoC) and Heterogeneous SoC (HSoC) scalability to achieve up to 1,200 TOPS, and offers optional robust secure boot and end-to-end data privacy.
NeuPro-M compliant processors initially include the following pre-configured cores:
NPM11 – single NeuPro-M engine, up to 20 TOPS at 1.25GHz
NPM18 – eight NeuPro-M engines, up to 160 TOPS at 1.25GHz
Illustrating its leading-edge performance, a single NPM11 core, when processing a ResNet50 convolutional neural network, achieves a 5X performance increase and a 6X memory bandwidth reduction versus its predecessor, resulting in exceptional power efficiency of up to 24 TOPS per watt. Built on the success of its predecessors, NeuPro-M can process all known neural network architectures and offers integrated native support for next-generation networks such as transformers, 3D convolution, self-attention and all types of recurrent neural networks. NeuPro-M has been optimized to process more than 250 neural networks, more than 450 AI kernels and more than 50 algorithms. The embedded vector processing unit (VPU) ensures future-proof, software-based support for new neural network topologies and new advances in AI workloads. Furthermore, the CDNN offline compression tool can increase the FPS/Watt of NeuPro-M by a factor of 5-10X for common benchmarks, with minimal impact on accuracy. "The artificial intelligence and machine learning processing requirements of edge AI and edge compute are growing at an incredible rate, as more and more data is generated and sensor-related software workloads continue to migrate to neural networks for better performance and efficiencies. With the power budget remaining the same for these devices, we need to find new and innovative methods of utilizing AI at the edge in these increasingly sophisticated systems. NeuPro-M is designed on the back of our extensive experience deploying AI processors and accelerators in millions of devices, from drones to security cameras, smartphones and automotive systems.
Its innovative, distributed architecture and shared memory system controllers reduce bandwidth and latency to an absolute minimum and provide superb overall utilization and power efficiency. With the ability to connect multiple NeuPro-M compliant cores in an SoC or chiplet to address the most demanding AI workloads, our customers can take their smart edge processor designs to the next level." Ran Snir, Vice President and General Manager of the Vision Business Unit at CEVA
The NeuPro-M heterogeneous architecture is composed of function-specific co-processors and load-balancing mechanisms that are the main contributors to the huge leap in performance and efficiency compared to its predecessor. By distributing control functions to local controllers and implementing local memory resources hierarchically, NeuPro-M achieves data-flow flexibility that results in more than 90% utilization and protects against data starvation of the different co-processors and accelerators at any given time. Optimal load balancing is obtained by the CDNN framework applying various data-flow schemes adapted to the specific network, the desired bandwidth, the available memory and the target performance.
NeuPro-M architecture highlights include:
Main grid array consisting of 4K MACs (Multiply And Accumulates), with mixed precision of 2-16 bits
Winograd transform engine for weights and activations, reducing convolution time by 2X and allowing 8-bit convolution processing with <0.5% precision degradation
Sparsity engine to avoid operations with zero-value weights or activations per layer, for up to 4X performance gain, while reducing memory bandwidth and power consumption
Fully programmable Vector Processing Unit, for handling new unsupported neural network architectures with all data types, from 32-bit Floating Point down to 2-bit Binary Neural Networks (BNN)
Configurable Weight and Data compression down to 2-bits while storing to memory, and real-time decompression upon reading, for reduced memory bandwidth
Dynamically configured two-level memory architecture to minimize power consumption attributed to data transfers to and from an external SDRAM
To illustrate the benefit of these innovative features of the NeuPro-M architecture: concurrent use of the orthogonal mechanisms of the Winograd transform, the Sparsity engine, and low-resolution 4x4-bit activations delivers more than a 3X reduction in cycle count on networks such as ResNet50 and Yolo V3. As neural network weights and biases, the data set and the network topology become key intellectual property of the owner, there is a strong need to protect them from unauthorized use. The NeuPro-M architecture supports secure access in the form of optional root of trust, authentication, and cryptographic accelerators. For the automotive market, NeuPro-M cores and the CEVA Deep Neural Network (CDNN) deep learning compiler and software toolkit comply with the ISO 26262 ASIL-B automotive functional safety standard and meet the stringent quality assurance standards IATF16949 and A-Spice.
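The benefit of the Sparsity engine described above is that multiply-accumulates whose weight or activation is zero simply never execute. This can be illustrated with a toy dot product (an illustrative sketch of the general technique only, not CEVA's hardware implementation):

```python
def sparse_mac(weights, activations):
    """Dot product that skips zero-valued operands, also counting
    only the multiply-accumulate operations actually performed."""
    acc, macs = 0, 0
    for w, a in zip(weights, activations):
        if w == 0 or a == 0:   # sparsity: zero operands cost nothing
            continue
        acc += w * a
        macs += 1
    return acc, macs

# A layer with 75% of weights pruned to zero executes 1 of 4 MACs,
# in the spirit of the "up to 4X" gain cited for highly sparse layers
result, macs_done = sparse_mac([0, 0, 3, 0], [5, 7, 2, 9])
print(result, macs_done)  # 6 1
```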
Together with CEVA's multi-award-winning neural network compiler – CDNN – and its robust software development environment, NeuPro-M provides a fully programmable hardware/software AI development environment for customers to maximize their AI performance. CDNN includes innovative software that can fully utilize customers' customized NeuPro-M hardware to optimize power, performance and bandwidth. The CDNN software also includes a memory manager for memory reduction and optimal load-balancing algorithms, and broad support for various network formats including ONNX, Caffe, TensorFlow, TensorFlow Lite, Pytorch and more. CDNN is compatible with common open-source frameworks, including Glow, TVM, Halide and TensorFlow, and includes model optimization features like 'layer fusion' and 'post training quantization', all while using precision conservation methods. NeuPro-M is available for licensing to lead customers today and for general licensing in Q2 this year. NeuPro-M customers can also benefit from Heterogeneous SoC design services from CEVA to help integrate and support system design and chiplet development. About CEVA, Inc. CEVA is the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions for a smarter, safer, connected world. We provide Digital Signal Processors, AI engines, wireless platforms, cryptography cores and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence. These technologies are offered in combination with our Intrinsix IP integration services, helping our customers address their most complex and time-critical integrated circuit design projects. Leveraging our technologies and chip design skills, many of the world's leading semiconductors, system companies and OEMs create power-efficient, intelligent, secure and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial, aerospace & defense and IoT.
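The 'post training quantization' feature mentioned above follows a widely used general pattern: trained floating-point weights are mapped to low-bit integers via a per-tensor scale, trading a small, bounded precision loss for much cheaper storage and compute. A minimal symmetric 8-bit sketch (illustrative only; CDNN's actual scheme is not documented here):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.4, -1.27, 0.02, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# round-to-nearest bounds the error by half a quantization step
assert np.all(np.abs(w - w_hat) <= s / 2 + 1e-6)
```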
Our DSP-based solutions include platforms for 5G baseband processing in mobile, IoT and infrastructure, advanced imaging and computer vision for any camera-enabled device, audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and inertial measurement unit ("IMU") solutions for markets including hearables, wearables, AR/VR, PC, robotics, remote controls and IoT. For wireless IoT, our platforms for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6/6e (802.11n/ac/ax), Ultra-wideband (UWB), NB-IoT and GNSS are the most broadly licensed connectivity platforms in the industry.

Read More
