Here is our summary of a new study by Avanade on how growing familiarity eases workplace fears about generative AI!
Avanade study: Familiarity reduces workplace fears of AI!
A recent study by Avanade highlights a significant shift in workplace attitudes towards generative AI, offering valuable insights into how organisations can encourage employees to embrace the technology.
The Avanade report examined the experiences of 700 Avanade employees worldwide who used Copilot for Microsoft 365 over a seven-week period. This was the largest pilot of Copilot for Microsoft 365 conducted during the Early Access Program.
The study investigated the impact of the generative AI tool on six human metrics in the workplace: communication, creativity, trust, work satisfaction, belongingness, and organisational citizenship behaviour. Overall findings revealed increased structured creativity, idea sharing and problem solving, but limited improvements to spontaneously generated or original thinking.
Key findings from Avanade study on workplace fears of AI
The Avanade study sheds light on how familiarity with generative AI is alleviating workplace fears, leading to greater acceptance and integration. Here are key findings from the Avanade study…
Communication
Initially, 72% of employees were apprehensive about how Copilot for Microsoft 365 would handle meeting transcriptions. By the end of the study, 45% of employees felt less cautious about their communications, reflecting a shift towards more open dialogue as familiarity with the tool grew.
Creativity and Innovation
Avanade observed a 2% increase in overall creativity and innovation, with the score rising to 82%. Acceptance of new working methods improved from 80% to 89%. However, there was a slight decrease in preference for tasks requiring original thinking.
Trust
The study found that 88% of employees felt Copilot for Microsoft 365 aligned with corporate values, while 65% believed it matched their personal values. Satisfaction with the tool’s assistance was at 75%, but concerns about transparency (78%) and accountability (65%) highlighted the need for clearer decision-making processes and governance.
Work Satisfaction
After implementing Copilot for Microsoft 365, 85% of employees reported a strong sense of accomplishment, and 80% remained engaged with their tasks. This suggests that the tool integrated smoothly into daily workflows without diminishing job satisfaction.
Belongingness
Team cohesion levels remained stable, with 85% of employees feeling consistent support from colleagues. Nevertheless, there was a 2% decline in the sense of belonging and team collaboration, indicating potential areas for improvement.
Organisational Citizenship Behaviour
The study showed an increase in technological adaptability to 86%, suggesting that Copilot for Microsoft 365 promotes technical skill development. Altruism and courtesy remained steady at 88%, indicating the tool did not negatively impact the collaborative and supportive culture of the organisation.
Please Support My Work!
Support my work through a bank transfer / PayPal / credit card!
Name : Adrian Wong Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp
Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.
He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.
Discover how Zoom Docs, the new AI-driven tool from Zoom, transforms team collaboration, document creation, and project planning for enhanced meeting effectiveness.
Zoom Docs transforms collaboration + meeting efficiency with AI
On August 5, 2024, Zoom officially launched Zoom Docs, a new AI-first solution designed to revolutionise the way teams collaborate and manage documents. As part of Zoom Workplace, this collaborative document solution with generative AI aims to enhance meeting effectiveness and streamline project planning.
Zoom Docs is our first Zoom Workplace product with generative AI built in from the ground up; it effortlessly transforms information from Zoom Meetings into actionable documents and knowledge bases, so teams can stay focused on meaningful work.
Zoom Docs is included at no additional cost with Zoom Workplace paid licenses, creating even more value for our customers. With AI Companion available every step of the way, Zoom Docs is purpose-built to empower people to ‘work happy’ and give them more time back in their day.
Zoom Docs represents a significant advancement in collaborative technology, combining the power of AI with the versatility of Zoom Workplace. Here is how it offers businesses a cutting-edge solution for enhancing team collaboration and productivity.
AI-Driven Document Creation
Zoom Docs leverages advanced artificial intelligence to assist in creating documents quickly and accurately. With features like content generation, automated editing suggestions, and optimised formatting, users can create and refine documents faster. The AI assists in structuring content, suggesting improvements, and ensuring consistency, making document management simpler and more effective.
Enhanced Team Collaboration
Thanks to its AI Companion, teams can collaborate in real-time on documents, share feedback instantly, and track changes seamlessly. The integration of AI tools facilitates smoother collaboration by automating routine tasks, highlighting important changes, and ensuring that all team members are on the same page.
Streamlined Project Planning
Zoom Docs supports comprehensive project planning by providing tools for task management, scheduling, and progress tracking. AI-driven insights help teams prioritize tasks, manage deadlines, and allocate resources efficiently. This streamlined approach to project planning ensures that teams can stay organized and focused on achieving their goals.
Benefits for Meeting Efficiency
Meeting productivity is enhanced by allowing participants to access and edit documents directly within the Zoom meeting platform. This integration reduces the need for switching between applications, saving time and improving focus during meetings.
Seamless Integration Into Zoom Workplace
Its seamless integration into Zoom Workplace not only provides a unified platform for communication and collaboration, it also ensures that users can access all their tools and documents in one place, streamlining workflows and enhancing overall efficiency.
Zoom Docs is available beginning today for users of the Zoom Workplace app, version 6.1.6 or later, which can be downloaded from the Zoom website. It can also be accessed from the Docs web homepage or the Zoom Web App.
Zoom Docs with AI Companion is included with all paid Zoom Workplace plans*. Basic (free) users can create up to 10 shared docs and unlimited personal docs without AI Companion but can upgrade to Zoom Workplace Pro, Business, or Enterprise plans for access to AI Companion capabilities across Zoom Workplace, including Docs. Account owners and admins may enable or disable AI Companion.
*Note: AI Companion is included at no additional cost with the paid services assigned to Zoom accounts. However, it may not be available for all regions and industry verticals.
Is Microsoft or CrowdStrike to blame for the global IT outage of Windows-based systems?! Take a look at the viral claims, and find out what the facts really are!
Claim : Microsoft Is Responsible For Global IT Outage, Not CrowdStrike!
On Friday, 19 July 2024 – a day that will live in digital infamy – businesses and organisations worldwide were hit by an IT outage on their Windows-based systems. Inevitably, some people blamed Microsoft for this debacle…
Circulating on WhatsApp : Very interesting to see how the media is playing down on the disaster.
Question remains “Not sure how microsoft is going to rollback the update or to install the patch as affected pcs have locked themselves out.”
S.L. Kanthan : Blue screens of death all over the world, from airports and ATM machines to grocery stores and even vending machines!
Thanks, Microsoft! 🤡 #outage
BRICS should create alternative operating systems. Very dangerous for the world to rely on one company.
Max Tegmark : With their impeccable safety culture, never letting things get deployed before they’re properly tested, I’m confident that @Microsoft is ready to safely handle the smarter-than-human AI that they’re trying to build with @OpenAI
Sumit Singh Rajput : This is serious! How a single tech outage can bring the world economy to its knees.
A glitch in Microsoft’s server caused a computer failure around the world. Today, everything is blue
Truth : CrowdStrike, Not Microsoft, Is Responsible For Global IT Outage!
This appears to be a complete misunderstanding of the global IT outage, which affected only systems and cloud services based on Microsoft Windows, and here are the reasons why…
Fact #1 : Global IT Outage Caused By CrowdStrike, Not Microsoft
Let me start by simply pointing out that the global IT outage that started on Friday, 19 July 2024, was caused by CrowdStrike, not Microsoft.
Soon after the outage occurred, CrowdStrike announced (and again) that it was caused by a bug in an update to its Falcon threat detection system.
The IT outage notably did not affect all Microsoft customers and users, only those who purchased and installed CrowdStrike Falcon, which is an “endpoint detection and response” software. This kind of software is designed for large organisations, and that is why this global IT outage is mainly affecting those organisations.
The scale is massive, because CrowdStrike is a leading provider of Endpoint Detection and Response (EDR) software. However, home users and small business users are not affected, because they rely on the built-in Windows Defender software, or consumer-grade software from the likes of Norton and McAfee.
Blaming Microsoft for the buggy update that CrowdStrike issued would be like blaming BMW for defective third-party tyres that leak air, and asking the automotive company to replace or fix those tyres.
Fact #2 : Microsoft Denies Responsibility For Global IT Outage
A Microsoft spokesperson has officially denied responsibility for the global IT outage caused by the CrowdStrike update:
CrowdStrike update was responsible for bringing down a number of IT systems globally. Microsoft does not have oversight into updates that CrowdStrike makes in its systems.
Fact #3 : Global IT Outage Caused By Bug In CrowdStrike Update
As CrowdStrike explained (and again), the infamous Windows Blue Screen of Death (BSOD) was caused by a bug in an update meant for Windows-based systems.
The outage was caused by a defect found in a Falcon content update for Windows hosts. Mac and Linux hosts are not impacted. This was not a cyberattack.
We are working closely with impacted customers and partners to ensure that all systems are restored, so you can deliver the services your customers rely on.
CrowdStrike further confirmed that the buggy code was introduced in a single channel file – C-00000291.sys, with the timestamp of 0409 UTC.
As former Google engineer Arpit Bhayani explained, the buggy code was trying to access an invalid memory location, triggering a panic and causing the BSOD.
I saw many engineers blaming the outage on Microsoft 🤦♂️ SWEs blaming without knowing the root cause is concerning.
It is not Microsoft, it is Crowdstrike who released an update for Windows that had a bug. The patch runs in Kernel mode to monitor system activity at a low level.
Because it was running in Kernel mode, the buggy code was trying to access an invalid memory location that triggered a panic and which showed Blue Screen of Death.
The name of the driver file that had the buggy update is “C-00000291.sys”, deleting it fixes the issue and unfortunately this needs to be done manually.
Microsoft has nothing to do with it.
Deleting the file, or replacing it with the previous or newer version, fixes the problem. However, it has to be done manually, as the affected computers and servers have “bricked” and cannot be remotely accessed.
Fact #4 : Microsoft Is Supposed To Vet Driver Updates
While Microsoft may not be responsible for the bug in the CrowdStrike update, some cybersecurity experts believe that it may hold some responsibility.
Costin Raiu, who worked at Kaspersky for 23 years and led its threat intelligence team, says that Microsoft is supposed to vet the code and cryptographically sign it. This suggests that Microsoft may also have missed the buggy code in the CrowdStrike Falcon kernel driver update.
It’s surprising that with the extreme attention paid to driver updates, this still happened. One simple driver can bring down everything. Which is what we saw here.
Raiu also noted that past updates to Kaspersky and Microsoft’s own Windows Defender antivirus software have also triggered similar Blue Screen of Death crashes in previous years.
Please help us FIGHT FAKE NEWS by sharing this fact check article out, and please SUPPORT our work!
Avanade just unveiled new generative AI services to help organisations and their people work responsibly and innovate with artificial intelligence!
Avanade Launches New Generative AI Services!
July 10, 2023 – Avanade, the leading Microsoft solutions provider, today unveiled new services to help clients address the pressing challenge to ready their people, processes, and technologies for artificial intelligence (AI).
As generative AI technology continues to transform the way we work, live and conduct business in every part of the world and in every industry, expectations are growing exponentially.
Avanade’s Trendlines research reveals that 85 percent of organisations expect AI to increase revenue growth by 2025, with more than two-thirds anticipating AI to be responsible for up to 16 percent growth in global annual revenue. All this points to lucrative rewards for those that seize the growth opportunities of AI now.
Avanade’s new services will enable leaders to assess and monitor multiple business and IT domains to prioritise actions, so they can responsibly harness the benefits of AI.
However, the research also indicates organisations are not ready. Trendlines shows that only 36 percent of business and IT leaders are completely confident that their organisation currently has sufficient checks and balances in place to mitigate the potential risks and harms of AI. Meanwhile, nearly half (48 percent) admit to not having specific guidelines and/or policies put into effect yet for responsible AI.
There is no end point to AI readiness. As generative AI continues to reshape the global business landscape, the importance of adopting an AI-first mindset cannot be overstated.
To seize the growth opportunities of AI and mitigate risks for unintended consequences through continual change, leaders need to consider more than the technology implications of AI. Ultimately, AI-first is people-first.
– Jillian Moore, Global Head of Advisory at Avanade
Avanade’s new services have been designed to expedite business value derived from AI while cultivating an AI-first mindset:
The Avanade AI Organisational Readiness Framework provides a comprehensive assessment of an organisation’s business and IT areas. It offers detailed insights into AI readiness across people, processes, and technologies, enabling leaders to prioritise responsible actions for leveraging AI’s benefits.
The service includes executive coaching, tailored employee training, and an innovative “AI control tower” with cloud-based tools, dashboards, and knowledge resources. This ensures continuous AI readiness and empowers leaders to monitor and take real-time actions.
The Avanade AI Governance Quick Start Service addresses the crucial requirement for responsible AI governance. It enables leaders to translate corporate values into guidelines and practices for governing the ethical use of AI.
With a strong framework and methodology, organisations can proactively assess risks in AI projects and enhance their existing business and IT governance processes, policies, and behaviours to effectively manage and reduce AI risks across all functions.
These new services complement Avanade’s existing portfolio of generative AI strategy workshops, with options ranging from a 2-hour workshop to a 6-week strategy Proof of Concept engagement.
As generative AI gains momentum in Southeast Asia, we see a profound shift in how people engage with technology and how organisations function. Avanade is committed to helping our clients navigate this dynamic landscape, focus on doing what matters, and fully embrace the potential of generative AI responsibly.
By evaluating their AI readiness and establishing an ethical technology framework, organisations can ensure their digital advancement aligns with their core values – enabling them to simultaneously prioritise innovation and responsibility.
– Bhavya Kapoor, Southeast Asia Managing Director, Avanade
WithSecure is harnessing the power of the offensive security approach in tackling evolving cloud threats!
WithSecure Takes Offensive Security Approach For Cloud Threats!
In a shifting cybersecurity landscape, WithSecure (formerly known as ‘F-Secure Business’) is harnessing the power of offensive security in its co-security and co-monitoring products and services. This revolutionary approach is designed to anticipate and mitigate cyber threats by understanding them from an attacker’s perspective.
During the recent SPHERE security conference 2023 in Helsinki, Finland, WithSecure’s Chief Product Officer, Antti Koskela, shed light on this approach.
We’ve done identity assessments for many cloud-based companies, unveiling weaknesses in their cloud platforms.
Our offensive security approach is about understanding the attack surface of a cloud-based estate. We focus on the digital perimeter, which is crucial to reducing the overall attack surface.
Koskela went on to explain that WithSecure has distilled this insight into an innovative managed service offering called ‘attack surface management’. This service provides a comprehensive view of a company’s vulnerabilities, including IP addresses, port vulnerabilities, exposed APIs and web services, identity matters, patching levels and more.
With more open architecture, control over your attack surface becomes paramount. ‘Zero trust’ alone isn’t the answer as human errors happen. Our holistic approach helps mitigate this.
WithSecure’s product suite integrates various cloud-native solutions to deliver protection based on specific client requirements. This collaborative process, termed ‘co-security’, is driven by the security and business outcomes defined by the clients. Koskela emphasised the tripartite focus of their solution:
It’s about process, people, and technology. We collaborate to secure the outcomes, letting company directors steer the course of business.
Our WithSecure Elements platform is the cornerstone of our technology, built collaboratively with our clients.
Koskela acknowledged the evolution of the IT industry, from client-server in the ‘90s to hosted services in the 2000s, cloud computing in the 2010s and cloud-native in the 2020s. He underscored the need for a new security approach to match the evolving business environments:
The cloud offers agility, speed, cost-efficiency. But with new technologies come new security considerations.
WithSecure has been proactive, creating solutions for every technological shift – be it firewalling and endpoint protection during the hosted services era, or data security and VPNs for the cloud computing era.
And now, with the rise of cloud-native tech, we’re helping clients to understand and secure their digital perimeter through our offensive security approach.
WithSecure Chief Product Officer, Antti Koskela (left), and APAC Regional Director Yong Meng Hong (right)
WithSecure Elements Picking Up In APAC
Since its mid-2021 debut, WithSecure’s Elements platform has gained considerable momentum here in Malaysia and the broader Asia-Pacific region. This comprehensive cybersecurity platform has made its mark by providing organisations with a unified solution to their security needs.
Elements equips enterprises with the insight, adaptability, and technology to tackle evolving threats and changing business environments.
Offering unified endpoint protection across devices, clouds and servers, Elements consolidates everything from vulnerability management and collaboration protection to detection and response into one easy-to-navigate security console.
– WithSecure Asia-Pacific Regional Director Yong Meng Hong
Yong further emphasised that the cloud-based Elements platform provides real-time visibility across an entire IT infrastructure, simplifying how enterprises manage their cybersecurity.
Flexible licensing options, including fixed-term subscriptions and usage-based billing, ensure that organisations can tailor their cybersecurity services according to their specific needs.
Elements offers centralised management capabilities, giving IT managers a comprehensive overview of their enterprise’s IT infrastructure, enhancing their reassurance and control.
Today, WithSecure is globally recognised, trusted by a myriad of enterprises to safeguard against cyber threats, while also protecting tens of millions of consumers through over two hundred service providers and telecommunications partners.
For organisations looking to navigate the cloud’s security challenges, WithSecure’s offensive security approach could be just the safeguard they need.
HUAWEI CEO Ren Zhengfei just warned of tough times ahead, and called for his company to go into survival mode!
Here is what you need to know…
HUAWEI CEO Ren Zhengfei Warns Of Tough Times!
On 22 August 2022, HUAWEI founder and CEO Ren Zhengfei posted a memo on its internal network, stating that “The entire company’s business policy should shift from the pursuit of expansion to the pursuit of profit and cash flow“.
In Ren’s opinion, the global economy will continue to decline and be in the doldrums for the next ten years, putting HUAWEI under great pressure.
He also said that it is unclear whether HUAWEI can break through in 2023 and 2024, partially blaming its woes on the continued blockade and suppression by the United States.
Therefore, he says HUAWEI must focus on its own survival, and shift from expanding to pursuing profit and improving cash flow over the next 3 years. To that end, the CEO said that HUAWEI will completely abandon its business in certain countries.
The HUAWEI CEO also said that the “chill” in business sentiment should be passed down to HUAWEI employees, by linking their bonuses and promotions to business results. In addition, he called for responsibilities to be consolidated, which sounds like a euphemism for job cuts.
This alarming memo by its CEO comes after HUAWEI – China’s largest company – saw revenue decline by 14% in the first three months of 2022, with its profit margin narrowing to 4.3% from 11.1% just a year ago.
The HUAWEI CEO appears to believe that China will experience a much longer and persistent economic slowdown spreading far beyond the current collapse of its real estate sector.
It is surprising to note that Chinese censors have not removed the memo, which suggests the HUAWEI CEO has enough clout to speak openly. His memo went viral on Chinese social media, and has been discussed and shared by more than 100 million Chinese netizens.
HUAWEI CEO Ren Zhengfei Tough Times Memo : Key Points
Here are the key points from HUAWEI CEO Ren Zhengfei’s “tough times” memo, machine-translated from NetEase:
Live with quality in the next three years
We have to face the difficulties the company faces now and in the future. The next ten years will be a very painful historical period, and the global economy will continue to decline. Due to the impact of the war and the continued blockade and suppression by the United States, the world economy is unlikely to improve in the next 3 to 5 years. Coupled with the impact of the pandemic, there will be no bright spots anywhere in the world. Consumption power will be greatly reduced, putting pressure on both supply and the market.
Under such circumstances, Huawei must lower its overly optimistic expectations about the future. From 2023 through 2025, we must make survival our main programme, and survive with quality. This slogan is very good; every business must be run carefully.
If we are to have even a little hope in 2025 as planned, we must first find a way to get through these three difficult years, and the basis of survival must shift to focus on cash flow and real profits, not just sales revenue. Our respite period is 2023 and 2024, and we are still not sure whether we can break through in these two years. So no one should tell stories; everyone must talk about realisation, especially when making business forecasts. Harbour no more illusions, and do not tell stories to deceive the company; losses will be deducted from your food package. First you must survive, and if you survive, you will have a future.
Shrink blind investment businesses
The 2023 budget should maintain a reasonable pace; blind expansion and blind investment in businesses should be scaled back or closed.
The entire company must use the budget effectively, but cannot blindly close all projects. The manpower saved will go to the front line, to continue optimising the organisation’s business and rationally staff the ICT infrastructure, which is our “black soil granary”. Competitive but complex hardware and software platforms, and the projects that merely ride on them, must be picked out. The legion’s job is to build the basic information platform and sell ICT better; the infrastructure is not for the ecosystem. The terminal business is the foundation of our future rise and breakthrough, but it must not be blind. We now need to narrow our front lines, concentrate our troops to fight a war of annihilation, and increase profits.
Huawei Cloud should focus on supporting Huawei’s business development in a down-to-earth manner, and take the road of supporting the industrial Internet. Digital energy should increase investment during its strategic window of opportunity, create greater value, contract its institutions, and strengthen its combat teams. Smart car solutions cannot be rolled out across a full front; we must reduce the research budget, strengthen the business closed loop, and take a modular approach to R&D, focusing on a few key components that can be competitive, while the rest can be sourced from others.
In addition to continuous investment in survival and profitability as the main goal, businesses that cannot generate value and profits in the next few years should be scaled down or closed, with human and material resources concentrated on the main channel. We must face reality and not reach too far; great ideals must be set aside, the mess cut through quickly, and surplus personnel moved to the strategic reserve team, to be recombined into reasonable positions to grab food.
Peripheral businesses must be taken out of the strategic core. After a marginal business is taken out, we first evaluate whether it can be done well, and how much resource it would take to do well. If a business cannot be done well despite huge resource consumption, it is better to close it and let others do it. For businesses we must do but are not doing well enough, we will reorganise the combat team and change the cadres. Where windows of opportunity exist, we will expand the strategic resource pool of reserve teams, cadres and experts, and form commando teams to attack within the window of opportunity.
Adhere to seeking truth from facts; contraction in the market must be firm. We used to embrace the ideal of globalisation and aspired to serve all mankind. What is our ideal now? To survive, and to earn a little wherever there is money. From this perspective, we need to adjust the market structure, and study what can be done and what should be abandoned.
Give up part of the market
The first focus is on value customers in value markets, with the main force deployed in the middle section of the normal distribution curve. In some country markets, we will withdraw completely. We also have “fat meat” markets, and people who used to gnaw bones will be transferred to eat the fat meat.
Second, hard-fought countries and regions will serve as assessment and training bases for future cadre promotion. Some countries have low output; although we still have to operate there, should we keep sending soldiers to guard the snowy mountaintop, when they will still be ordinary soldiers after they come down? The snowy mountain tests people, and potential new cadres stationed there have the opportunity to be promoted to army commander, because in a small country the business is easy to integrate: the same person works through the budget estimate, contract, tender, delivery and working-hour quota, with the whole solution handled together, so a returning cadre can clear several rungs of the promotion ladder at once. Of course, some employees who have been on the frontier for a long time see their incomes fall when they return to China, and there is also the issue of their children’s schooling.
Third, employees returning from overseas should have priority in obtaining skills training and job opportunities. They must have a period of protection, to keep their posts and ensure they receive proper training. The skills of personnel in difficult countries may lag behind those in China, because they lack a realistic combat environment; how can they improve? Like a soldier on a plateau, he stood very high but absorbed no cosmic energy. They paid the price; if they were tested as soon as they returned to China, they would be eliminated, and then no one would want to go to such difficult areas. So we must guarantee that returning employees have job arrangements and relaxed time to study. Whether they can catch up during this period is another matter.
Financial planning must centre on cash flow. In a crisis, the priority is to generate our own blood supply of cash. Although we say things will improve in 2025, what if we have run out of shells by then? So we need a contingency plan and measures that keep our grain supply safe.
The company has two major expenditures: distributions to employees, including stable wages, which builds internal confidence and cohesion; and repayment of bank loans, which builds society's trust in us. For projects with obvious potential risks, harbour no illusions: surface them as soon as possible, let audit reach a conclusion, and quantify the risks. At the same time, assets that have been written off can still be managed; through management, risk can be turned into a reasonable return. We must not carry bubbles in our hearts, looking at reports with great confidence about money we will never actually collect.
Let the chill pass to everyone
Consolidate responsibilities, and link bonuses, promotions and upgrades to business results, so that the chill is passed on to everyone.
First, in this year's and next year's assessments, increase the weight given to cash flow and profit. It is better for sales revenue to fall a little while profit and cash flow rise. Bonuses tied to operating-profit growth should be somewhat larger, to encourage everyone to compete for profit.
Second, each responsibility centre signs an assessment responsibility letter, and the company should aim for a KPI-based reciprocal reward mechanism. Normal promotions and upgrades will continue next year, but the link between reward and responsibility must be strengthened, so the chill is felt year after year; at the same time, we must stay patient and enthusiastic through the transition. I once told the executive board that the basic salary framework should not change, as it is a rigid indicator, but excellent employees can still be promoted and upgraded, and bonuses can be highly elastic. Why elastic? To encourage everyone to go to the battlefield and grab grain; the front line is not the same as headquarters.
Third, the bonus gap between businesses must widen this year; it is absolutely not allowed to smooth everything over instead of driving everyone toward the short-term goal of grabbing grain. In the past, the company's policy was essentially to even things out: no one felt the cold in winter because everyone had a quilt, just a little thicker or thinner. From the end of this year, businesses that deliver more profit and cash flow will receive larger bonuses, while businesses that cannot create value will receive very low ones, even to the point of forcing such businesses to wind themselves down, passing the chill on.
Right now we must survive; we are not fighting for ideals. The legions compete for their year-end bonus, because bonuses are not handed out by the company: they come from the profit the legion itself earns, part of which is handed up to the company. If we cannot earn our grain, we must dare to withhold bonuses, because employees' basic income already covers the necessities of life. Of course, some strategic businesses cannot create value in the short term, and we can identify these through evaluation, but many marginal businesses with poor performance must be cut. This is adjustment, consolidation, enrichment and improvement.
Invest at all costs at the point of survival crisis
Quality is the primary productive force, and we must hold to that line. R&D must take responsibility for the quality and performance of products, and service experts must have comprehensive capabilities. A poor-quality product is a disgrace to its R&D staff; that sentence should be posted on the walls of the R&D office areas. Network failure rates around the world are rising, and a single accident can destroy the trust of an entire market.
100 - 1 = 0. In our R&D, whether of single boards, single devices or whole systems, quality must come first. Quality is the best support that R&D and manufacturing staff can give to market-facing service staff. If product quality is poor, it is like making our brothers charge the front line through a hail of bullets, ice and snow, sweltering heat and the COVID-19 virus. We therefore need to establish a reverse assessment mechanism: first-line reverse assessment should cover not only the service organisations at headquarters but also extend to the product lines. Where quality is poor, the front should be narrowed and competitiveness improved.
We need to raise the status of the service system, and service experts must have the comprehensive ability to judge incidents affecting the network experience. In the past we valued R&D more than service; now we must value the service system as well.
Control inventory scientifically and reasonably. We must shift from the panicked self-rescue of the past to high-quality self-rescue, taking care to reduce inventory sensibly so that excess stock does not squeeze the company's profit and cash flow and create a new crisis. We can invest at all costs in strategic key opportunities and existential crises, but we cannot scatter money on non-strategic opportunities.
Please Support My Work!
Support my work through a bank transfer / PayPal / credit card!
Name : Adrian Wong Bank Transfer : CIMB 7064555917 (Swift Code : CIBBMYKL)
Credit Card / Paypal : https://paypal.me/techarp
Dr. Adrian Wong has been writing about tech and science since 1997, even publishing a book with Prentice Hall called Breaking Through The BIOS Barrier (ISBN 978-0131455368) while in medical school.
He continues to devote countless hours every day writing about tech, medicine and science, in his pursuit of facts in a post-truth world.
Microsoft Teams has finally been restored after suffering a massive outage that lasted many hours!
Here was what happened…
Microsoft Teams Suffers Massive Outage!
On Wednesday, 20 July 2022, thousands of users were unable to access Microsoft Teams, and the outage continued into Thursday, 21 July 2022.
This was a big problem, because Teams had become an integral part of daily operations for many businesses that had adapted to a hybrid work pattern during the COVID-19 pandemic.
Microsoft Teams users relied on the service to organise their workflow and communicate internally, making calls and messaging each other.
The MS Teams problem also affected other services downstream, with some users reporting issues with Microsoft Office 365 as well.
Microsoft acknowledged the downstream impact to multiple Office 365 services with Teams integration, like Microsoft Word, Office Online and SharePoint Online.
Why Microsoft Teams Suffered Such A Massive Outage!
About 1.5 hours after Teams went down, Microsoft announced that they had found the root cause – “a recent deployment contained a broken connection to an internal storage device, which result in impact“.
That’s tech-speak for “we installed a system upgrade that pointed to a storage device that does not exist, so MS Teams stopped working“.
They quickly redirected traffic to “a healthy service to mitigate impact“, which allowed unaffected users to continue using MS Teams, but it did not seem to help those who had lost access.
Although they identified the root cause, restoration appears to be taking time. Two hours later, they could only report that “Microsoft Teams functionality is beginning to recover“, which they repeated two hours later.
In the meantime, affected MS Teams users creatively expressed their “frustration” on social media…
Update @ 3:56 PM (GMT+8) : The Microsoft 365 team announced that Teams availability has “mostly recovered“, but “a few service features” still required attention.
The Malaysia Ministry of Health has clarified that the MySejahtera app and its data was not sold to any private company.
Here is what you need to know!
Claim : MySejahtera Was Sold To Private Company!
Malaysian opposition leader Anwar Ibrahim claimed that MySejahtera would be sold to a private company, MySJ Sdn. Bhd., through direct negotiation.
The MySejahtera application was rolled out in April 2020, under the Malaysia Ministry of Health (KKM). It was built by KPISoft (now Entomo) as a corporate social responsibility (CSR) initiative.
According to his statement, the government appointed MySJ Sdn. Bhd. through direct negotiation to take over MySejahtera on 26 November 2021.
Then in December 2021, the Public Accounts Committee (PAC) proposed that the government should take over MySejahtera since it is now “an integral part of the national health system”.
KKM : MySejahtera Was NOT Sold To Private Company!
On 27 March 2022, the Malaysia Health Minister Khairy Jamaluddin issued a press statement, clarifying that the government did not sell MySejahtera to any private company.
Here are the key points of his statement on the claims that MySejahtera was sold to MySJ Sdn. Bhd. :
On 26 November 2021, the government decided that MySejahtera is owned by the government, and the Ministry of Health (KKM) was appointed as the main owner of the application.
The government did not pay KPISoft any money for the development of MySejahtera, which was carried out from 27 March 2020 until 31 March 2021.
This was based on the company’s offer to let the government use the app for one year for free, as a Corporate Social Responsibility (CSR) initiative.
After the CSR period ended on 31 March 2021, the government agreed to extend the use of MySejahtera, and work with KPISoft to expand its features.
On 26 November 2021, the government ordered KKM to form a Price Negotiation Committee comprising stakeholder agencies to negotiate the purchase and service maintenance of MySejahtera for two (2) years.
The scope of the procurement and management of the MySejahtera app included operating MySejahtera, system development including additional modules, maintenance, datacenter management and third-party services like Google Map and Places API, as well as SMS services.
On 28 February 2022, the Ministry of Finance approved KKM’s procurement of the MySejahtera app.
MySejahtera data has been under KKM’s supervision from the first day it was used, and the data is processed according to KKM procedures.
KKM does not share MySejahtera data with any government agency, or private companies.
All data from the MySejahtera app is uploaded to a cloud server network, and can be accessed only by the MySejahtera app.
In short, the MySejahtera app was not sold to any private company, and was purchased by the Ministry of Health with approval from the Ministry of Finance on 28 February 2022.
NTT Ltd just launched their fifth data centre in Malaysia – Cyberjaya 5 (CBJ5)!
Here is a quick look at what NTT Cyberjaya 5 offers!
NTT Launches Fifth Data Center In Malaysia – Cyberjaya 5
On 3 February 2021, NTT Ltd announced the launch of their fifth data center in Malaysia – Cyberjaya 5 (CBJ5).
Located within the NTT Cyberjaya Campus, this new 107,000 square feet data center is designed for hyperscalers and high-end enterprises in Malaysia’s growing digital economy.
CBJ5 supports 6.5 megawatts of flexible and scalable power, and boasts a Tier IV-ready, compact and modular design, with a cooling wall system that handles up to 15 kilowatts per rack.
NTT clients will have greater access to flexible, scalable and secure infrastructure in Malaysia – a regional data center hub.
“The demand for data storage and managed hosting services is expected to grow exponentially across Malaysia. This fifth data center will meet the expanding needs of organizations to reach their digital business objectives, in particular the FSI sector, as our data center complies with the Risk Management in Technology (RMiT) guideline set by Bank Negara Malaysia. We hope to play a key role in providing the vital data capacity at a high speed to keep Malaysia’s digital ecosystems and the digital economy ticking.” said Henrick Choo, CEO, NTT Ltd. in Malaysia.
NTT Cyberjaya 5 : Part Of Strategic ASEAN Hub
CBJ5 is connected to the existing Asia Submarine-cable Express (ASE) and Asia Pacific Gateway (APG) cable system, and will eventually be linked to the upcoming MIST cable system.
The MIST cable system, a strategic joint venture with Orient Link Pte. Ltd. for international submarine cables in South East Asia, will be available by end-2022. It will enable NTT Ltd. to expand its offerings into India and beyond, while the ASE and APG cable systems provide global connectivity from Asia to the United States.
This new expansion in Malaysia is part of NTT Global Data Centres division’s growth strategy. Malaysia is a prime data centre market in the ASEAN region, due to the abundant availability of resources, and favourable government policies.
“NTT places Asia Pacific as a tactical key region, and Malaysia – a strategic hub for the submarine cables operated by NTT such as the new MIST cable system, as well as the existing Asia Submarine-cable Express (ASE) and Asia Pacific Gateway (APG). Furthermore, CBJ5 will drive business opportunities in Asia through the upcoming MIST cable system which will link all our large-scale data centers in the region. Our continued commitment to Malaysia will help position NTT as a technologically innovative leader to address the industries of the future,” said Ryuichi Matsuo, Executive Vice President for NTT Ltd.’s Global Data Centers division.
“The pandemic also illustrated the importance of effective connectivity and reliable infrastructure to ensure business continuity. NTT’s global data center platform offers flexible, scalable and secure infrastructure along with a full-stack of customizable solutions that clients can utilize to support their digital transformation needs and maintain critical applications in a comprehensive, hybrid IT environment,” he concluded.
If you like our work, you can help support us by visiting our sponsors, participating in the Tech ARP Forums, or even donating to our fund. Any help you can render is greatly appreciated!
Hitachi Vantara just unveiled their 2020 HCI (Hyperconverged Infrastructure) portfolio, with updates to the Hitachi Unified Compute Platform (UCP). Here are the details…
Hitachi Vantara : 2020 HCI Portfolio Updates
Hitachi Vantara today unveiled their 2020 HCI (Hyperconverged Infrastructure) portfolio, featuring updates to Hitachi UCP HC and Hitachi UCP RS.
Faster provisioning with new Hitachi UCP Advisor
Certified support for SAP HANA workloads
New Intel Cascade Lake Refresh Xeon processors for increased performance
Enhanced lifecycle management capabilities for non-disruptive upgrades
The updated 2020 Hitachi Vantara HCI solutions unify cloud infrastructure management with interoperability across their customer’s environments, whether they are using traditional storage, HCI-powered hybrid clouds or public clouds.
These new HCI offerings include a scalable and simplified foundation for hybrid clouds, allowing customers to rapidly scale-out, when increased datacenter resources are required.
Unified Cloud Management
Customers have the flexibility to build a cloud infrastructure with seamless workload and data mobility across on-premises and public cloud environments.
Hitachi UCP Advisor accelerates provisioning up to 80% faster compared to previous HCI management tools and reduces management complexity across the environment.
Scalable Performance
Greater performance, scale and density support IT departments’ ability to rapidly scale data center resources for business-critical applications and lower operational overhead for better TCO.
The updated HCI platforms provide certified support for SAP HANA workloads on HCI. Intel Cascade Lake Refresh Xeon processors increase performance for workload consolidation while avoiding resource contention issues.
Hitachi HCI solutions help reduce CapEx and OpEx overhead with advanced automation and data efficiency technologies.
Simplified Consumption
Everflex from Hitachi Vantara provides simple, elastic and comprehensive acquisition choices for the entire Hitachi Vantara portfolio, including the UCP Family, with consumption-based pricing models that align IT spend with business use and help lower costs by up to 20% with pay-as-you-go pricing.
Customers can also accelerate time to production with pre-validated and optimized bundles and starter packs, including solutions enabling remote work.
2020 Hitachi Vantara HCI Portfolio : Availability
The 2020 Hitachi Unified Compute Platform HC and Hitachi Unified Compute Platform RS are available from Hitachi Vantara and their global network of business partners, with immediate effect.
Google recently introduced Confidential Computing, with Confidential VM as the first product, and it’s powered by 2nd Gen AMD EPYC!
Here’s an overview of Confidential Computing and Confidential VM, and how they leverage the 2nd Gen AMD EPYC processor!
Google Cloud Confidential Computing : What Is It?
Google Cloud encrypts customer data while it’s “at-rest” and “in-transit“. But that data must be decrypted before it can be processed.
Confidential Computing addresses that problem by encrypting data in-use – while it’s being processed. This ensures that data is kept encrypted while in memory and outside the CPU.
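To see why "in-use" data matters, here is a minimal, purely illustrative Python sketch (using a toy XOR cipher as a stand-in for real at-rest encryption; the names and data are invented for the example): on a conventional host, data can sit encrypted at rest, but it must be decrypted into memory before any computation can happen on it. That in-memory plaintext is exactly the gap Confidential Computing closes.

```python
# Toy illustration (NOT real cryptography): why "in-use" data is exposed.
# A simple XOR stream stands in for at-rest encryption; a real system
# would use an authenticated cipher such as AES-GCM.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)

# Data encrypted "at rest" is unreadable...
record = b"salary=90000"
stored = xor_cipher(record, key)
assert stored != record

# ...but to actually process it, the host must hold the plaintext in memory.
in_use = xor_cipher(stored, key)       # decrypted for processing
salary = int(in_use.split(b"=")[1])    # computation needs plaintext
print(salary)  # 90000
```

With Confidential Computing, that in-memory working copy stays encrypted at the hardware level, so the host and its operator never see the plaintext.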
Google Cloud Confidential VM, Powered By 2nd Gen AMD EPYC
The first product that Google is unveiling under its Confidential Computing portfolio is Confidential VM, now in beta.
Confidential VM basically adds memory encryption to the existing suite of isolation and sandboxing techniques Google Cloud uses to keep their virtual machines secure and isolated.
This will help customers, especially those in regulated industries, to better protect sensitive data by further isolating their workloads in the cloud.
Google Cloud Confidential VM : Key Features
Powered By 2nd Gen AMD EPYC
Google Cloud Confidential VM runs on N2D series virtual machines powered by the 2nd Gen AMD EPYC processors.
It leverages the Secure Encrypted Virtualisation (SEV) feature in 2nd Gen AMD EPYC processors to keep VM memory encrypted with a dedicated per-VM instance key.
These keys are generated and managed by the AMD Secure Processor inside the EPYC processor, during VM creation and reside only inside the VM – making them inaccessible to Google, or any other virtual machines running on the host.
Your data will stay encrypted while it’s being used, indexed, queried, or trained on. Encryption keys are generated in hardware, per virtual machine and are not exportable.
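For readers who want to try it, a beta-era Confidential VM could be created from the gcloud CLI along these lines. This is a sketch, not an official recipe: the instance name, zone and image are placeholders, and the flags reflect the beta period and may have changed since.

```shell
# Hypothetical example: create an N2D-based Confidential VM (beta-era flags).
gcloud beta compute instances create my-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --confidential-compute \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```

Note the N2D machine type: Confidential VM requires the AMD EPYC-based N2D series, since the memory encryption is done by SEV in the processor itself.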
Confidential VM Performance
Google Cloud worked together with the AMD Cloud Solution team to minimise the performance impact of memory encryption on workloads.
They added support for new OSS drivers (nvme and gvnic) to handle storage traffic and network traffic with higher throughput than older protocols, ensuring that Confidential VM performs almost as fast as non-confidential VMs.
Easy Transition
According to Google, transitioning to Confidential VM is easy – all Google Cloud Platform (GCP) workloads can readily run as a Confidential VM whenever you want to.
Available OS Images
In addition to the hardware-based inline memory encryption, Google built Confidential VM on top of Shielded VM, to harden your OS image and verify the integrity of your firmware, kernel binaries and drivers.
Google currently offers images of Ubuntu 18.04, Ubuntu 20.04, Container Optimized OS (COS v81), and RHEL 8.2.
They are currently working with CentOS, Debian and other distributors to offer additional OS images for Confidential VM.
AMD is on a roll, announcing more supercomputing wins for their 2nd Gen EPYC processors, including four supercomputers in the top 50 list, and ten in the top 500!
2nd Gen AMD EPYC : A Quick Primer
The 2nd Gen AMD EPYC family of server processors are based on the AMD Zen 2 microarchitecture and fabricated on the latest 7 nm process technology.
According to AMD, they offer up to 90% better integer performance and up to 79% better floating-point performance, than the competing Intel Xeon Platinum 8280 processor. For more details :
Here is a quick 7.5 minute summary of the 2nd Gen EPYC product presentations by Dr. Lisa Su, Mark Papermaster and Forrest Norrod!
AMD EPYC : Four Supercomputers In Top 50, Ten In Top 500!
Thanks to the greatly improved performance of their 2nd Gen EPYC processors, they now power four supercomputers in the top 50 list :
Top 50 Rank | Supercomputer | System | Processor
7 | Selene | NVIDIA DGX A100 SuperPOD | AMD EPYC 7742
30 | Belenos | Atos BullSequana XH2000 | AMD EPYC 7H12
34 | Joliot-Curie | Atos BullSequana XH2000 | AMD EPYC 7H12
48 | Mahti | Atos BullSequana XH2000 | AMD EPYC 7H12
On top of those four supercomputers, there are another six supercomputers in the Top 500 ranking powered by AMD EPYC.
In addition to powering supercomputers, AMD EPYC 7742 processors will soon power Gigabyte servers selected by CERN to handle data from their Large Hadron Collider (LHC).
3rd Gen AMD EPYC Supercomputers
AMD also announced that two universities will deploy Dell EMC PowerEdge servers powered by the upcoming 3rd Gen AMD EPYC processors.
Indiana University
Indiana University will deploy Jetstream 2 – an eight-petaflop distributed cloud computing system, powered by the upcoming 3rd Gen AMD EPYC processors.
Jetstream 2 will be used by researchers in a variety of fields like AI, social sciences and COVID-19 research.
Purdue University
Purdue University will deploy Anvil – a supercomputer powered by the upcoming 3rd Gen AMD EPYC processors, for use in a wide range of computational and data-intensive research.
AMD EPYC will also power Purdue University’s community cluster “Bell”, scheduled for deployment in the fall.
And here is why this is nothing more than yet another Internet hoax :
Only China Is Capable Of Doing This
The only country that has accomplished most of what was shared above is China, but it took them decades to erect the Great Firewall of China.
It’s not just the massive infrastructure that needs to be created; it also requires legislation to be enacted, and considerable manpower and resources to maintain such a system.
That’s why China is leaning heavily on AI and cloud computing capabilities to automatically and quickly censor information deemed “sensitive”.
However, no other country has come close to spending the money and resources on a similar scale, although Cuba, Vietnam, Zimbabwe and Belarus have imported some surveillance technology from China.
WhatsApp, Instagram + Facebook Messenger Have End-to-End Encryption
All three Facebook-owned apps are now running on the same common platform, which provides end-to-end encryption.
End-to-end encryption protects messages as they travel through the Internet, and specifically prevents anyone (bad guys or your friendly government censor) from snooping into your conversation.
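The idea behind end-to-end encryption can be illustrated with a toy Diffie-Hellman exchange in pure Python. This is purely illustrative (the small prime and bare exchange are not secure, and real messengers use vetted protocols such as the Signal protocol): the two endpoints derive a shared secret that an intermediary who only observes the traffic cannot feasibly compute.

```python
# Toy Diffie-Hellman key agreement (illustrative only, NOT secure).
import secrets

P = 0xFFFFFFFB  # small public prime for illustration; real DH uses 2048+ bits
G = 5           # public generator

def keypair():
    """Generate a private exponent and the public value derived from it."""
    private = secrets.randbelow(P - 3) + 2
    public = pow(G, private, P)
    return private, public

# Alice and Bob each send only their public value over the network...
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# ...yet both derive the same shared secret. An eavesdropper who saw only
# a_pub and b_pub cannot feasibly recover it.
alice_secret = pow(b_pub, a_priv, P)
bob_secret = pow(a_pub, b_priv, P)
assert alice_secret == bob_secret
```

Messages encrypted under that shared secret are readable only at the two endpoints, which is why servers and censors in the middle see only ciphertext.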
There are cybercrime laws in most, if not every, country in the world. But they are all enacted by legislative bodies of some sort, not the police.
The police are the executive arm of a country, empowered to enforce the law. They do not have the power to create a law, and then act on it.
Even The Government Has Debunked It!
Just in case you are still not convinced, even the Malaysian government issued a fact check on this hoax, debunking it as fake news.
Basically, it states “The Ministry of Home Affairs has NEVER recorded telephone calls or monitored social media in this country“.
Please help us FIGHT FAKE NEWS by sharing this fact check article out, and please SUPPORT our work!
NTT Limited just announced that they will construct their 5th data centre at the NTT Cyberjaya Campus in Malaysia!
Here are the details…
NTT To Build 5th Data Centre In Cyberjaya, Malaysia
As part of their expansion plans, NTT will build their fifth data centre at their Cyberjaya campus in Malaysia. This Tier-4 ready, compact and modular data centre, called CBJ5, is scheduled to be complete in 2020.
Once it comes online, NTT says that it will provide their clients with “a flexible and scalable power and cooling solution coupled with an industry leading Service Level Agreement (SLA) that is the first of its kind in Malaysia.”
This announcement comes after Malaysia’s Budget 2020 was tabled, with a focus on driving economic growth through digital transformation. There will be additional grants and incentives for organisations to digitally transform their businesses.
As such, NTT believes there will be increased demand for their services, which CBJ5 will be ready to fulfil.
“CBJ5 is designed to meet the requirements of hyperscalers and high-end enterprises, especially those that require solid power management capabilities. CBJ5 is able to accommodate progressive power increments and cooling of up to 10kW/rack. This is revolutionary as it will allow our clients to maximize the power resources in their chosen data center,” said Henrick Choo, CEO, Malaysia for NTT Ltd.
NTT Data Centre Capabilities
With CBJ5, NTT aims to become the leading Digital Infrastructure Provider in Malaysia, attracting both domestic and global traffic into its carrier-neutral data centre campus that also includes NTT’s global Tier-1 IP network, Multi-Cloud Connect platform and domestic Internet Exchange.
The NTT Cyberjaya Campus features a high-density fibre network facilitating inter-connection among our clients to create a digital supply chain ecosystem.
Bank Negara Malaysia recently announced a set of guidelines defining Risk Management in Technology (RMiT) for financial institutions. Security thus becomes a crucial consideration, as these institutions are now responsible for the safety of their information infrastructure, systems and data.
NTT will address these new requirements by working together with financial institutions, so they are able to comply to BNM’s guideline. Henrick also stated that security will be heightened as the company grows.
“NTT’s physical data center access control will be increased to safeguard all data center blocks within the NTT Cyberjaya Campus. We will also be introducing smart security technology integrating Visitor Management Systems with facial recognition technology. Essentially, we will double our security cover with the two combined,” he added.
NTT clients are able to choose from multiple architectures from on-premise, to cloud and even multi-cloud. All solutions will come with managed security solutions to offer data protection while minimising business disruptions.
At GTC China 2019, DiDi announced that they will adopt NVIDIA GPUs and AI technologies to develop self-driving cars, as well as their cloud computing solutions.
DiDi Adopts NVIDIA AI + GPUs For Self-Driving Cars!
This announcement comes after DiDi spun off their autonomous driving unit as an independent company in August 2019.
In their announcement, DiDi confirmed that they will use NVIDIA technologies in both their data centres and onboard their self-driving cars :
NVIDIA GPUs will be used to train machine learning algorithms in the data center
NVIDIA DRIVE will be used for inference in their Level 4 self-driving cars
NVIDIA DRIVE will fuse data from all types of sensors – cameras, LIDAR, radar, etc – and use numerous deep neural networks (DNNs) to understand the surrounding area, so the self-driving car can plan a safe way forward.
Those DNNs (deep neural networks) will require prior training using NVIDIA GPU data centre servers, and machine learning algorithms.
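The split described above, training in the data centre and inference on the vehicle, can be sketched in a few lines of pure Python. This is a toy perceptron assumed for illustration; it bears no relation to actual NVIDIA DRIVE workloads, but shows the same pattern: learn weights offline, then ship only the frozen weights for onboard inference.

```python
# Minimal sketch of the train/infer split: a model is trained offline
# (the "data centre" step), then only its learned weights are used for
# inference (the "onboard" step). Toy example: learn OR-gate behaviour.

# --- Training (data centre) ---
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1
for _ in range(50):
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - pred
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

# --- Inference (onboard): frozen weights, no further learning ---
def infer(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

print([infer(a, b) for (a, b), _ in samples])  # [0, 1, 1, 1]
```

In the real system, the training loop runs on GPU servers over sensor datasets, and the resulting network is deployed to DRIVE hardware in the car for real-time inference.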
At Supercomputing 2019, Intel unveiled their oneAPI initiative for heterogenous computing, promising to deliver a unified programming experience for developers.
Here is an overview of the Intel oneAPI unified programming model, and what it means for programmers!
The Need For Intel oneAPI
The modern computing environment is now a lot less CPU-centric, with the greater adoption of GPUs, FPGAs and custom-built accelerators (like the Alibaba Hanguang 800).
Their different scalar, vector, matrix and spatial architectures require different APIs and code bases, which complicates attempts to utilise a mix of those capabilities.
Intel oneAPI For Heterogenous Computing
Intel oneAPI promises to change all that, offering a unified programming model for those different architectures.
It allows developers to create workloads and applications for multiple architectures on their platform of choice, without the need to develop and maintain separate code bases, tools and workflow.
Intel oneAPI comprises two components – the open industry initiative, and the Intel oneAPI beta toolkit :
oneAPI Initiative
This is a cross-architecture development model based on industry standards, and an open specification, to encourage broader adoption.
Intel oneAPI Beta Toolkit
This beta toolkit offers the Intel oneAPI specification components with direct programming (Data Parallel C++), API-based programming with performance libraries, advanced analysis and debug tools.
Developers can test code and workloads in the Intel DevCloud for oneAPI on multiple Intel architectures.
What Processors + Accelerators Are Supported By Intel oneAPI?
The beta Intel oneAPI reference implementation currently supports these Intel platforms :
Intel Xeon Scalable processors
Intel Core and Atom processors
Intel processor graphics (as a proxy for future Intel discrete data centre GPUs)
Intel FPGAs (Intel Arria, Stratix)
The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. However, it is up to those vendors to create their own oneAPI implementations and optimise them for their own hardware.
Are oneAPI Elements Open-Sourced?
Many oneAPI libraries and components are already open sourced, or soon will be.
What Companies Are Participating In The oneAPI Initiative?
According to Intel, more than 30 vendors and research organisations support the oneAPI initiative, including CERN openlab, SAP and the University of Cambridge.
Companies that create their own implementation of oneAPI and complete a self-certification process will be allowed to use the oneAPI initiative brand and logo.
Available Intel oneAPI Toolkits
At the time of its launch (17 November 2019), here are the toolkits that Intel has made available for developers to download and use :
Intel oneAPI Base Toolkit (Beta)
This foundational kit enables developers of all types to build, test, and deploy performance-driven, data-centric applications across CPUs, GPUs, and FPGAs. Comes with :
Intel oneAPI Data Parallel C++ Compiler
Intel Distribution for Python
Multiple optimized libraries
Advanced analysis and debugging tools
Domain Specific oneAPI Toolkits for Specialised Workloads :
oneAPI HPC Toolkit (beta) : Deliver fast C++, Fortran, OpenMP, and MPI applications that scale.
oneAPI DL Framework Developer Toolkit (beta) : Build deep learning frameworks or customize existing ones.
oneAPI IoT Toolkit (beta) : Build high-performing, efficient, reliable solutions that run at the network’s edge.
At the Samsung Developer Conference 2019, Samsung and IBM announced a joint platform that leverages Samsung Galaxy devices and IBM cloud technologies to introduce new 5G, AI-powered mobile solutions!
Here is what you need to know about this new Samsung-IBM AI IoT cloud platform, and the 5G AI-powered mobile solutions it’s powering for governments and enterprises.
Samsung – IBM AI IoT Cloud Platform For 5G Mobile Solutions!
Built using IBM Cloud technologies and Samsung Galaxy mobile devices, the new platform will help improve the work environment for employees in high-stress or high-risk occupations.
This will help reduce the risks to public employees who work in dangerous and high-stress situations. That is critical, because nearly 3 million people die each year due to occupational accidents.
This new, unnamed Samsung-IBM platform will help governments and enterprises track their employees' vitals, including heart rate and physical activity. This will allow them to determine if an employee is in distress and requires help.
The Samsung – IBM AI IoT Cloud Platform In Use
5G mobile solutions based on the new Samsung-IBM AI IoT platform are being piloted by multiple police forces to monitor officers' health in real time, and provide situational awareness insights to first responders and their managers.
The platform can track, in real time, the safety and wellness indicators of first responders equipped with Samsung Galaxy Watches and Galaxy smartphones with 5G connectivity.
It can instantly alert emergency managers if there is a significant change in the safety parameters, which may indicate the first responder is in danger of a heart attack, heat exhaustion or other life-threatening events.
This allows them to anticipate potential dangers, and quickly send assistance. This should greatly reduce the risk of death and injuries to their employees.
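As a rough illustration of the kind of alert rule such a platform might apply, here is a minimal Python sketch of baseline-deviation alerting. The function name, threshold and baseline figures are our own assumptions for illustration, and not details of the actual Samsung-IBM platform.

```python
# Hypothetical sketch of a vital-sign alert rule: flag a first responder
# when their heart rate deviates too far from their personal baseline.
# All names and thresholds here are illustrative assumptions.

def needs_assistance(heart_rate_bpm: float, baseline_bpm: float,
                     deviation_limit: float = 0.35) -> bool:
    """Return True when the reading deviates from the personal
    baseline by more than the allowed fraction."""
    deviation = abs(heart_rate_bpm - baseline_bpm) / baseline_bpm
    return deviation > deviation_limit

# With a resting baseline of 70 bpm, a spike to 150 bpm would raise
# an alert, while a routine reading of 85 bpm would not.
```

A real platform would of course combine multiple indicators (activity, temperature, location) with per-user baselines learned over time; this sketch only shows the basic threshold idea.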
At the Apsara Conference 2019, Alibaba Cloud announced that they will be introducing the 3rd Gen X-Dragon Architecture for their cloud servers!
Here is a quick PRIMER on the new 3rd Gen X-Dragon Architecture!
What Is X-Dragon?
X-Dragon – Shenlong in Chinese – is a proprietary bare metal server architecture developed by Alibaba Cloud for their cloud computing requirements.
Built around a custom X-Dragon MOC card, it delivers what Alibaba Cloud calls Elastic Compute Service (ECS) capability in a bare metal server.
The ECS bare metal instances it offers combine the benefits of bare metal servers, and virtual machines.
For example, it offers the direct access to CPU and RAM resources, without virtualisation overheads, that bare metal servers provide, combined with the instant deployment and image migration capabilities of virtual machines.
The downsides? ECS bare metal instances, once deployed, cannot be upgraded or downgraded. In addition, if there is a hardware failure, a failover occurs and the data remains stored in the instance’s storage drives.
What’s New In The 3rd Gen X-Dragon Architecture?
Basically – SPEED.
According to Alibaba Cloud, the 3rd Gen X-Dragon architecture is able to increase Queries Per Second (QPS) by 30% and decrease latency by 60% in e-commerce scenarios.
In tandem, they also announced the 6th Gen ECS instance, which delivers a 20% boost in computing power, a 30% reduction in memory latency, and a 70% reduction in storage IO latency.
Also important, though not new: because the X-Dragon architecture is cloud-native by design, it eliminates power wastage from idle bare metal servers. Alibaba Cloud claims that alone reduces the unit computing cost by 50%.
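To put those claimed percentages in perspective, here is a quick back-of-the-envelope calculation. The baseline figures below are illustrative assumptions, not Alibaba Cloud numbers.

```python
# Illustrative arithmetic only: the baseline figures are assumed.
baseline_qps = 100_000        # assumed queries per second before the upgrade
baseline_latency_ms = 10.0    # assumed request latency in milliseconds

# Apply the claimed 3rd Gen X-Dragon improvements.
new_qps = baseline_qps * 1.30                       # +30% QPS
new_latency_ms = baseline_latency_ms * (1 - 0.60)   # -60% latency
```

In other words, a server handling 100,000 queries per second at 10 ms latency would, under those claims, handle 130,000 queries per second at 4 ms.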
3rd Gen X-Dragon Architecture Availability
Alibaba Cloud will start rolling out the 3rd Gen X-Dragon architecture upgrade to millions of their cloud servers around the world from 2020 onwards.
Find out what cybersecurity experts from Dimension Data, Cisco and more think about cloud security, cyberattacks and mitigating them.
Dimension Data Expert Panels On Cyberattack Mitigation + Cloud Security
Freda Liu hosted the two expert panels, featuring representatives from Cisco, Recorded Future, F5 and Cybersecurity Malaysia, as well as Mark Thomas, Dimension Data’s VP of Cybersecurity.
The two expert panels addressed the chief concerns of their clients, namely on cloud security, and the mitigation of cyberattacks.
Dimension Data Panel #1 : Top Cyberattacks + Mitigation Tips
Enterprises continuously experience cyberattacks in today’s digital world. Challenges like compliance management, coin mining, web-based attacks, and credential theft have all been observed over the past year.
In this session, the Dimension Data panel of experts provided insights into the top cyberattacks and the shifting threat landscape. They also discussed best practices and practical measures you can take to bolster your cybersecurity defences.
Dimension Data Panel #2 : Security In The Cloud
Today, cybersecurity leaders’ jobs are made more difficult as the number of areas and ‘things’ that need to be secured is constantly increasing.
Your infrastructure is no longer just physical: it is now cloud, and hybrid, too.
What are the people, process and tools you need in place to help improve your organisation’s resilience and embark on the journey to world-class cybersecurity?
On the second day of Dell Technologies World 2019, VMware CEO Pat Gelsinger shared his 2019 vision for VMware. Here is a sneak peek at the 2019 VMware strategy and plans!
Pat Gelsinger Reveals 2019 VMware Strategy + Plans!
VMware was a major force at Dell Technologies World 2019, demonstrating VMware’s great importance in the Dell Technologies family.
But that was not all, as Pat Gelsinger would soon reveal…
VMware CEO Pat Gelsinger comes from a “hardware” background, serving as Intel’s first Chief Technology Officer before taking over as President and CEO of EMC.
He made this point in his talk about the Superpowers of Tech: Cloud, Mobile, AI/ML and Edge/IoT.
VMware’s vision still focuses on enabling any cloud resource and application, from the past or the present, to work on any device, while improving intrinsic security over time.
The hybrid cloud is the best answer for almost every single workload, because of three “laws” – the laws of physics, the laws of economics, and the laws of the land.
VMware and Dell Technologies are focusing on the hybrid cloud architecture with VxRail as its building block.
To bind together and manage disparate cloud and on-premise solutions with greater visibility, VMware is offering CloudHealth on all VMware Cloud solutions on Amazon, Azure and Google.
VMware is making great investments into Kubernetes as the “middleware for the cloud”.
VMware is partnering with Pivotal to make VMware PKS available as VMware Enterprise PKS, VMware Essential PKS and VMware Cloud PKS.
VMware is also rebuilding their security architecture and products, with AppDefense and vSphere Platinum, giving virtual machines an AI capability to learn the users’ behaviour, as well as end-to-end encryption throughout the network infrastructure.
The newly-announced Dell Unified Workspace leverages VMware’s Workspace ONE unified endpoint management to maintain the user’s devices in good health, while allowing them to seamlessly access any native, SaaS (Software as a Service), or internal application, with a single sign-on from any device.
The results of the 2019 Virtustream Multicloud Survey were just announced, with some interesting insights into the multicloud strategies being employed globally. Here is our quick primer on what the study revealed!
The 2019 Virtustream Multicloud Survey
Virtustream, a Dell Technologies company, commissioned Forrester Consulting to conduct their latest global survey of the multicloud strategies of more than 700 companies.
Virtustream commissioned the survey to study the current state of enterprise IT strategies for cloud-based workloads.
The 2019 Virtustream Multicloud Survey Summarised
The survey report, titled Multicloud Drives Mission-Critical Benefits, found that :
almost all (97%) of those companies use multicloud strategies for their mission-critical applications
two thirds use multiple vendors for mission-critical workloads
multicloud deployments will increase over the next few years, with businesses expanding their multicloud budgets for staffing, investments and training
nearly 90% of companies surveyed will maintain or increase their budget to boost their multicloud deployments
nearly 75% of companies surveyed are using multiple cloud providers for mission-critical applications
nearly 61% of companies surveyed were concerned about the security and management of their multicloud strategies
Benefits Of Multicloud Strategies For Mission-Critical Applications
A majority of business organisations shared that multicloud strategies were used in mission-critical cases involving customers’ financial data or sales applications.
In fact, the survey found that nearly 75% of business organisations are using two to three cloud providers for business-critical applications.
Among the main benefits of using multicloud strategies were quick and efficient response to business changes and challenges.
Increased performance and operational cost savings were also cited as additional benefits.
Security + Management Concerns In The 2019 Virtustream Multicloud Survey
Management and deployment of multicloud strategies are complex, which is why many business organisations face issues with their implementation.
Although nearly 61% of respondents admitted that multicloud strategies complement their business objectives, there were still concerns about security and management.
Thus, many companies are planning to hire more qualified and skilled staff to support their multicloud strategies, and to work with cloud vendors that have the proper expertise and experience.
A host of new Microsoft Azure technologies for developers have been announced at the Microsoft Build 2019 conference, which took place in Seattle. Here is a primer on what they announced!
Microsoft Build 2019 : New Azure Technologies Unveiled!
With nearly 6,000 developers and content creators attending Microsoft Build 2019 in Seattle, Microsoft announced a series of new Azure services, spanning hybrid cloud and edge computing, to support them. They include advanced technologies such as :
Artificial Intelligence (AI)
Mixed reality
IoT (Internet of Things)
Blockchain
Microsoft Build 2019 : New Azure AI Technologies
First of all, they unveiled a new set of Microsoft Azure AI technologies to help developers and data scientists utilise AI as a solution :
Azure Cognitive Services, which will enable applications to see, hear, respond, translate, reason and more.
Microsoft will add the “Decision” function to Cognitive Services to help users make decisions through highly specific and customized recommendations.
Azure Search will also be further enhanced with an AI feature.
Microsoft Build 2019 : New Microsoft Azure Machine Learning Innovations
Microsoft Azure Machine Learning has been enhanced with new machine learning innovations designed to simplify the building, training and deployment of machine learning models. They include :
MLOps capabilities with Azure DevOps
Automated ML advancements
Visual machine learning interface
Microsoft Build 2019 : New Edge Computing Solutions
Microsoft also aims to boost edge computing by introducing these new solutions:
Azure SQL Database Edge
IoT Plug and Play
HoloLens 2 Developer Bundle
Unreal Engine 4
Microsoft Build 2019 : Azure Blockchain Service
The Azure Blockchain Workbench, which Microsoft released last year to support development of blockchain applications, has been further enhanced this year with the Azure Blockchain Service.
Azure Blockchain Service is a tool that simplifies the formation and management of consortium blockchain networks so companies only need to focus on app development.
J.P. Morgan’s Ethereum platform was introduced by Microsoft as the first ledger available in the Azure Blockchain Service.
One of the biggest stories from Dell Technologies World 2019 was the new Microsoft – Dell partnership. Microsoft CEO Satya Nadella himself joined Michael Dell to talk about it!
The 2019 Microsoft – Dell Partnership
Dell Technologies and Microsoft have been partners for a long time now, but at Dell Technologies World 2019, they announced that they were expanding their partnership in a number of digital transformation solutions.
Let’s take a look at the components of the 2019 Microsoft – Dell partnership…
Azure VMware Solutions – VMware On Azure!
For the first time ever, Microsoft will offer VMware on Azure! The new Azure VMware Solutions are built on VMware Cloud Foundation, and deployed in Azure.
This would allow companies to capitalise on VMware’s trusted cloud infrastructure and the mission-critical performance of Microsoft Azure.
Hybrid Cloud Connectivity
With Azure VMware Solutions, customers will be able to seamlessly migrate, extend and run existing VMware workloads from on-premises environments to Microsoft Azure without the need to re-architect applications or retool operations.
They will also be able to build, run, manage, and secure new and existing applications across VMware environments and Microsoft Azure, while extending a single model for operations based on established tools, skills and processes as part of a hybrid cloud strategy.
Tapping Into Azure Capabilities
Azure VMware Solutions enable organisations to tap into Azure’s scale, security and fast provisioning cycles to innovate and modernise applications while also improving performance.
By integrating with native Azure services, customers can easily infuse advanced capabilities like AI, Machine Learning, and IoT into their applications, enabling new, intelligent experiences.
Metal-as-a-Service
Azure VMware Solutions are first-party services from Microsoft developed in collaboration with VMware Cloud Verified partners CloudSimple and Virtustream (a Dell Technologies company).
Both CloudSimple and Virtustream run the latest VMware software-defined data center technology.
This ensures that customers enjoy the same benefits of a consistent infrastructure and consistent operations in the cloud as they achieve in their own physical data center, while allowing customers to also access the capabilities of Microsoft Azure.
New Microsoft – Dell Workspace Solutions
The 2019 Microsoft – Dell partnership also sees a collaboration in digital workspace solutions between Microsoft and both Dell Technologies and VMware.
Microsoft 365 + VMware Workspace ONE
Customers who use both Microsoft 365 and VMware Workspace ONE will now be able to leverage both solutions to maximise their investments.
Specifically, they will be able to use Workspace ONE to manage and secure Office 365 across devices through cloud-based integration with Microsoft Intune and Azure Active Directory.
Dell Provisioning Services Integration
Through the new Dell Technologies Unified Workspace, customers can leverage the integration of Microsoft Windows Autopilot and Dell Device Provisioning and Deployment Services, like Dell ProDeploy – all enabled by the integration of Microsoft 365, Workspace ONE, and Dell Provisioning Services.
Windows Virtual Desktop
Microsoft also announced Windows Virtual Desktop, the only service that delivers a multi-session Windows 10 experience, optimisations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps.
As a part of this agreement, VMware will extend the capabilities of Microsoft Windows Virtual Desktop to enable customers to further accelerate their cloud initiatives, leveraging VMware Horizon Cloud on Microsoft Azure.
Initial capabilities are expected to be available as a tech preview by the end of calendar year 2019.
VMware + Azure Integration
Microsoft and VMware are also exploring initiatives to drive further integration between VMware infrastructure and Azure such as integration of VMware NSX with Azure Networking and integration of specific Azure services with VMware management solutions.
They will also be exploring bringing specific Azure services to the VMware on-premises customers. Through this collaboration, the companies aim to give customers a more seamless experience across VMware and Azure environments.
SAP and the ASEAN Foundation just announced that applications for the 2019 ASEANDSE | ASEAN Data Science Explorers programme are now open! Here are the full details!
What Is ASEANDSE | ASEAN Data Science Explorers?
The ASEANDSE (ASEAN Data Science Explorers) programme is a joint collaboration between SAP and the ASEAN Foundation. It aims to promote and galvanise the use of data science amongst ASEAN tertiary students.
It aims to do this through two key activities – a series of enablement sessions, and a data analytics competition. Since its introduction in 2017, ASEANDSE has empowered over 5,000 youths from 287 higher education institutions in the ASEAN region.
The 2019 ASEANDSE Programme
The 2019 ASEANDSE programme will be carried out from February to October 2019. It starts with enablement sessions that are designed to improve the data analytics skills and knowledge of both students and lecturers at local institutions of higher learning across the ASEAN region.
These enablement sessions will be followed by a national, and then regional, data analytics competition. At these competitions, student teams of two will present their data-driven proposals using the SAP Analytics Cloud service.
Their ASEANDSE competition proposals must tackle issues affecting their country or ASEAN in general, aligned with these UN Sustainable Development Goals :
Good health and well-being
Quality education
Gender equality
Decent work and economic growth
Industry, innovation and infrastructure
Sustainable cities and communities
One team from each ASEAN member state will be crowned as the national finalist before advancing to the 2019 ASEANDSE regional finals, which will be held in Bangkok on 16 October 2019.
There, the 10 national finalists will be given the opportunity to present their winning ideas to a panel of judges made up of distinguished representatives from the ASEAN Foundation, SAP, various government officials and selected NGOs.
Where To Join The 2019 ASEANDSE Programme
The 2019 ASEANDSE programme is now open for registration, until 10 May 2019. Here are the eligibility requirements :
Nationals of ASEAN member countries (i.e. Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam)
Full-time tertiary students currently pursuing their Diploma or Undergraduate studies in one of the tertiary institutions in Southeast Asia.
Above the age of 16 as at the start of the Contest Period. Participants under the age of 18 must obtain parental consent. The consent form is available upon registration.
Microsoft Ignite 2019 is here in Sydney! It was sold out, of course, but we managed to get a ticket.
So here is our quick tour for everyone else who could not make it, but wanted to see what it’s all about!
Microsoft Ignite 2019 @ Sydney
The Microsoft Ignite 2019 @ Sydney is a Microsoft tech conference for developers and tech professionals. For two days, it offers skill-building workshops, networking opportunities and access to top Microsoft engineers. No wonder tickets were quickly sold out!
For developers and tech professionals who want to learn how to better leverage Microsoft’s cloud services, the 2019 Microsoft Ignite is a rare opportunity to learn directly from those who build Power BI, Microsoft 365, and Azure. The conference is organised around six learning tracks :
Building your Applications for the Cloud – Learn how to architect and build your applications to take advantage of the scale that the cloud offers.
Deploying your Desktop & Infrastructure – Discover how to begin your journey to the cloud and what steps you need to both move your infrastructure and deploy your desktop.
Getting the most out of your Data – Learn how to use AI and Machine Learning to gain new understanding from existing data within your organisation.
Migrating Applications to the Cloud – Learn what it takes to modernise your applications and prepare them for successful migration to the cloud.
Optimise Teamwork in your Organisation – Reveal the collaborative workforce within your business and learn how to build effective teams.
Securing your Organisation – Build a secure organisation without compromising the productivity of your business.
Microsoft Ignite 2019 @ Sydney : A Quick Tour!
Microsoft Ignite 2019 @ Sydney was held in the International Convention Center Sydney (ICC Sydney) over two days – 13th and 14th February 2019. Yes, even on Valentine’s Day. These are truly dedicated professionals!
The Microsoft Ignite 2019 @ Sydney covered two halls, as well as several lecture halls. In our quick tour, we take a look at the Hub, where participants get to interact directly with Microsoft professionals and their partners.
Note : We intentionally recorded our tour while most participants were inside the lecture halls for the workshops. Otherwise, the halls would be swarmed with people, making it difficult for us to record our tour.
The 2019 Imagine Cup Asia Showcase
As the 2019 Imagine Cup Asia concluded a day earlier, Microsoft also took the opportunity to showcase the projects of the 12 top Asian teams.
We highly recommend you check out what these young entrepreneurs came up with this year. Be inspired by their creativity!
On 30 October 2018, Dimension Data held a forum on Living in a Multi-Cloud World with F5 Networks. We had the exclusive opportunity to interview three key Dimension Data executives. Find out what we learned!
Dimension Data On Living In A Multi-Cloud World!
The Living In A Multi-Cloud World forum had a tagline of Connect, Automate, Secure. Let’s find out from Andy Cocks (CTO of Dimension Data Asia Pacific), Sandy Woo (Solutions Director of Dimension Data Malaysia) and Neville Burdan (Director of Cybersecurity, Dimension Data Asia Pacific) exactly what they mean!
Here are some key excerpts from the interview :
A key problem for many organisations is their inability to hire and retain talent to manage their infrastructure, so it would be cheaper and more efficient to leverage multi-cloud platforms.
Managing a company’s digital requirements is getting exponentially complex, so the choice is to either outsource the job, or automate the various tasks.
There is also a lack of awareness about the advantages of adopting a multi-cloud platform, coupled with a lack of hyperscalers in Malaysia.
The Dimension Data Managed Cloud Platform, which launched in July 2018, attempts to bridge the gap between demand and availability of cloud computing and storage, with multi-cloud support.
Multi-Cloud On Dimension Data Managed Cloud Platform
The Dimension Data Managed Cloud Platform, introduced in Malaysia in July 2018, is multi-cloud capable. Its Managed Services Operation Portal allows you to provision, deploy, monitor, and manage multiple instances from other cloud service providers like Microsoft Azure and Amazon Web Services.
Enterprise-Grade Security & Reliability
The Dimension Data Managed Cloud Platform offers enterprise-grade security and reliability, backed by a modern data centre with robust backup and disaster recovery capabilities to ensure business continuity. This allows their clients to create a hybrid environment that is highly scalable and responsive to their business needs, without worrying about security or reliability.
Compliance & Certification
The Dimension Data Managed Cloud Platform is certified to be compliant with SOC 1 Type II, SOC 2 Type II, ISO 22301, ISO 27001, and ISO 9001.
SAP HANA Enterprise Cloud Premium Partner
Dimension Data is one of only five global SAP HANA Enterprise Cloud Premium Partners, so it’s no surprise that the Dimension Data Managed Cloud Platform supports SAP S/4HANA in addition to other business enterprise applications and business productivity suites.
Dimension Data just announced that they are expanding their enterprise-grade Managed Cloud Platform (MCP) across Asia Pacific. Here in Malaysia, they are introducing the Dimension Data Managed Cloud Platform in partnership with the NTT Group.
The Dimension Data Managed Cloud Platform Explained
Building on the success of their global and regional Managed Cloud Platforms, Dimension Data is introducing their MCP in Malaysia, to better serve their clients and meet local demand for enterprise-grade hybrid cloud services.
The Dimension Data Managed Cloud Platform allows clients to securely deploy their applications and workloads into the cloud, while preserving data sovereignty. This platform also comes with built-in automation to help clients manage and capitalise on the benefits of multi-cloud environments.
SAP HANA Enterprise Cloud Premium Partner
Dimension Data is one of only five global SAP HANA Enterprise Cloud Premium Partners, so it’s no surprise that the Dimension Data Managed Cloud Platform supports SAP S/4HANA in addition to other business enterprise applications and business productivity suites.
Compliance & Certification
The Dimension Data Managed Cloud Platform is certified to be compliant with SOC 1 Type II, SOC 2 Type II, ISO 22301, ISO 27001, and ISO 9001.
Multi-Cloud Capable
Dimension Data Managed Cloud Platform is multi-cloud capable. Its Managed Services Operation Portal allows you to provision, deploy, monitor, and manage multiple instances from other cloud service providers like Microsoft Azure and Amazon Web Services.
Enterprise-Grade Security & Reliability
Partnering with the NTT Group allows the Dimension Data Managed Cloud Platform to deliver enterprise-grade security and reliability. They tout a modern data centre with robust backup and disaster recovery capabilities to ensure business continuity. This allows their clients to create a hybrid environment that is highly scalable and responsive to their business needs, without worrying about security or reliability.
Key Alliance Group (KAG) today unveiled the site of their new state-of-the-art data center that will power their Progenet Cloud service. This is part of their move to take advantage of the new Digital Free Trade Zone (DFTZ) announced by the Malaysian government.
Progenet Cloud
KAG acquired Progenet Sdn. Bhd., a boutique cloud service provider, and rebranded it as Progenet Innovations Sdn. Bhd. (PGI).
In conjunction with this acquisition, KAG began construction of a dedicated data center on the 4th floor of Menara Lien Hoe. This data center is expected to be completed in March 2018.
When completed, Progenet Cloud will be the first end-to-end cloud service in Malaysia. They will own the real estate, the data center and the cloud service. Until then, Progenet Cloud is hosted at the AIMS and Equinix data centers.
The new data center will be a carrier-neutral data center, with 10,000 square feet per floor, supporting up to 160 racks. It will be powered by Vertiv’s SmartAisle – a row-based enclosure system that combines racks, power, cooling and infrastructure management. KAG’s other partners in creating the new data center include Kaspersky and BitGlass.
The future data center is designed for high-density web-scale infrastructure, allowing PGI to offer both public and private cloud services. This facility also comes with the latest Data Center Infrastructure Management (DCIM) solution, allowing for more efficient management and better risk control.
Once the new facility is operational next year, Progenet Innovations will be able to offer higher levels of service, better disaster recovery options, and new cloud computing services.
Just before we flew to Computex 2017, we attended the AWS Masterclass on Artificial Intelligence. It offered us an in-depth look at AI concepts like machine learning, deep learning and neural networks. We also saw how Amazon Web Services (AWS) uses all that to create easy-to-use tools for developers to create their own AI applications at low cost and virtually no capital outlay.
The AWS Masterclass on Artificial Intelligence
AWS Malaysia flew in Olivier Klein, the AWS Asia Pacific Solutions Architect, to conduct the AWS Masterclass. During the two-hour session, he showed how easily the various AWS services and tools allow virtually anyone to create their own AI applications at low cost and with virtually no capital outlay.
The topic of artificial intelligence is wide-ranging, covering everything from basic AI concepts to demonstrations of how to use AWS services like Amazon Polly and Amazon Rekognition to quickly create AI applications. We present to you – the complete AWS Masterclass on Artificial Intelligence!
The AWS Masterclass on AI is made up of these main topics :

AWS Cloud and An Introduction to Artificial Intelligence, Machine Learning, Deep Learning (15 minutes)
An overview of Amazon Web Services and the latest innovations in the data analytics, machine learning, deep learning and AI space.

The Road to Artificial Intelligence (20 minutes)
Demystifying AI concepts and related terminologies, as well as the underlying technologies. We dive deeper into the concepts of machine learning and deep learning models, such as neural networks, and how they lead to artificial intelligence.

Connecting Things and Sensing the Real World (30 minutes)
As part of an AI that aligns with our physical world, we need to understand how the Internet-of-Things (IoT) space helps to create natural interaction channels. We walk through real-world examples and demonstrations, including voice interactions through Amazon Lex, Amazon Polly and the Alexa Voice Service, as well as visual recognition with services such as Amazon Rekognition. We also bridge this with real-time data sensed from the physical world via AWS IoT.

Retrospective and Real-Time Data Analytics (30 minutes)
Every AI must continuously "learn" and be "trained" through past performance and feedback data. Retrospective and real-time data analytics are crucial to building intelligence models. We dive into some of the new trends and concepts our customers are using to perform fast and cost-effective analytics on AWS.
In the next two pages, we will dissect the video and share with you the key points from each segment of this AWS Masterclass.
The AWS Masterclass on AI Key Points (Part 1)
Here is an exhaustive list of key takeaway points from the AWS Masterclass on Artificial Intelligence, with their individual timestamps in the video :
Introduction To AWS Cloud
AWS has 16 regions around the world (0:51), with two or more availability zones per region (1:37), and 76 edge locations (1:56) to accelerate end-user connectivity to AWS services.
AWS offers 90+ cloud services (3:45), all of which use the On-Demand Model (4:38) – you pay only for what you use, whether that’s a GB of storage or transfer, or execution time for a computational process.
You don’t even need to plan for your requirements or inform AWS how much capacity you need (5:05). Just use and pay what you need.
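The pay-per-use arithmetic behind the On-Demand Model is simple enough to sketch. The rates below are purely hypothetical placeholders, not actual AWS prices:

```python
# Back-of-the-envelope illustration of the on-demand model: you pay per
# unit actually consumed, with no upfront commitment or capacity planning.
# All rates below are hypothetical, for illustration only.

def on_demand_cost(hours_used: float, hourly_rate: float,
                   gb_stored: float, gb_rate: float) -> float:
    """Total bill = compute time actually used + storage actually consumed."""
    return hours_used * hourly_rate + gb_stored * gb_rate

# e.g. a job that runs 3 hours of compute at $0.10/hour,
# plus 50 GB of storage at $0.02/GB-month:
bill = on_demand_cost(3, 0.10, 50, 0.02)
print(f"${bill:.2f}")  # → $1.30
```

If nothing runs and nothing is stored, the bill is zero, which is the whole point of the model.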
AWS has a practice of passing their cost savings to their customers (5:59), cutting prices 61 times since 2006.
AWS keeps adding new services over the years (6:19), with over a thousand new services introduced in 2016 (7:03).
Introduction to Artificial Intelligence, Machine Learning, Deep Learning
Artificial intelligence is based on unsupervised machine learning (7:45), specifically deep learning models.
Insurance companies like AON use it for actuarial calculations (7:59), and services like Netflix use it to generate recommendations (8:04).
A lot of AI models have been built specifically around natural language understanding, and using vision to interact with customers, as well as predicting and understanding customer behaviour (9:23).
Here is a quick look at what the AWS services management console looks like (9:58).
This is how you launch 10 compute instances (virtual servers) in AWS (11:40).
The ability to access multiple instances quickly is very useful for AI training (12:40), because it gives the user access to large amounts of computational power, which can be quickly terminated (13:10).
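The console walkthrough at this point can also be scripted. Here is a minimal boto3 sketch, assuming AWS credentials are already configured; the AMI ID and instance type are placeholders:

```python
# A sketch of launching (and later terminating) a batch of EC2 instances,
# mirroring the console demo. The request-building helper is separated out
# so it can be inspected without touching AWS.

def batch_launch_params(ami_id: str, instance_type: str, count: int) -> dict:
    """Build the keyword arguments for EC2's run_instances call."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,  # fail the request unless all instances can start
        "MaxCount": count,
    }

def launch_and_get_ids(count: int = 10) -> list:
    import boto3  # requires AWS credentials to be configured
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(**batch_launch_params("ami-12345678", "t2.micro", count))
    return [i["InstanceId"] for i in resp["Instances"]]
    # tear down later with: ec2.terminate_instances(InstanceIds=ids)
```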
Machine learning, and artificial intelligence in general, is not new to Amazon.com, the parent company of AWS (14:14).
Amazon.com uses a lot of AI models (14:34) for recommendations and demand forecasting.
The visual search feature in Amazon app uses visual recognition and AI models to identify a picture you take (15:33).
Olivier introduces Amazon Go (16:07), a prototype grocery store in Seattle.
The Road to Artificial Intelligence
The first component of any artificial intelligence is the “ability to sense the real world” (18:46), connecting everything together.
Cheaper bandwidth (19:26) now allows more devices to be connected to the cloud, allowing more data to be collected for the purpose of training AI models.
Cloud computing platforms like AWS allow the storage and processing of all that sensor data in real time (19:53).
All of that information can be used in deep learning models (20:14) to create an artificial intelligence that understands, in a natural way, what we are doing, and what we want or need.
Olivier shows how machine learning can quickly solve a Rubik’s cube (20:47), which has 43 quintillion unique combinations.
You can even build a Raspberry Pi-powered machine (24:33) that can solve a Rubik’s cube puzzle in 0.9 seconds.
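The 43 quintillion figure can be checked with the standard counting argument for the 3x3x3 cube:

```python
import math

# Standard state count for the 3x3x3 Rubik's cube: 8 corner pieces in 8!
# arrangements with 3^7 independent orientations, 12 edge pieces in 12!
# arrangements with 2^11 orientations, and only half of all those
# combinations are reachable by legal moves (permutation parity).
corners = math.factorial(8) * 3**7
edges = math.factorial(12) * 2**11
reachable = corners * edges // 2

print(reachable)  # → 43252003274489856000, i.e. ~43 quintillion
```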
Some of these deep learning models are available on Amazon AI (25:11), which is a combination of different services (25:44).
Olivier shows what it means to “train a deep learning model” (28:19) using a neural network (29:15).
Deep learning is computationally-intensive (30:39), but once it derives a model that works well, the predictive aspect is not computationally-intensive (30:52).
A pre-trained AI model can be loaded into a low-powered device (31:02), allowing it to perform AI functions without requiring large amounts of bandwidth or computational power.
Olivier demonstrates the YOLO (You Only Look Once) project, which uses an AI model pre-trained on pictures of objects (31:58), allowing it to detect objects in any video.
The identification of objects is the baseline for autonomous driving systems (34:19), as used by Tu Simple.
Tu Simple also used a similar model to train a drone to detect and follow a person (35:28).
The AWS Masterclass on AI Key Points (Part 2)
Connecting Things and Sensing the Real World
Cloud services like AWS IoT (37:35) allow you to securely connect billions of IoT (Internet of Things) devices.
Olivier prefers to think of IoT as Intelligent Orchestrated Technology (37:52).
Olivier demonstrates how the combination of multiple data sources (maps, vehicle GPS, real-time weather reports) in Bangkok can be used to predict traffic as well as road conditions to create optimal routes (39:07), reducing traffic congestion by 30%.
The PetaBencana service in Jakarta uses picture recognition and IoT sensors to identify flooded roads (42:21) for better emergency response and disaster management.
Olivier demonstrates how easy it is to connect IoT devices to the AWS IoT service (43:46), and use them to sense and interact with the environment.
Olivier shows how the capabilities of the Amazon Echo can be extended by creating an Alexa Skill using the AWS Lambda function (59:07).
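An Alexa Skill backend boils down to a Lambda handler that maps an intent to a spoken response. A minimal sketch, assuming a made-up intent name "HelloIntent"; the response layout follows the Alexa Skills Kit JSON format:

```python
# Minimal Alexa Skill backend as an AWS Lambda handler. The intent name
# "HelloIntent" is a hypothetical example for illustration.

def lambda_handler(event, context):
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "HelloIntent":
        text = "Hello from my first skill!"
    else:
        text = "Sorry, I did not get that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Simulate the event Alexa would send to Lambda:
sample_event = {"request": {"type": "IntentRequest",
                            "intent": {"name": "HelloIntent"}}}
print(lambda_handler(sample_event, None)["response"]["outputSpeech"]["text"])
```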
Developers can create and publish Alexa Skills for sale in the Amazon marketplace (1:03:30).
Amazon Polly (1:04:10) renders life-like speech, while the Amazon Lex conversational engine (1:04:17) has natural language understanding and automatic speech recognition. Amazon Rekognition (1:04:29) performs image analysis.
Amazon Polly (1:04:50) turns text into life-like speech using deep learning to change the pitch and intonation according to the context. Olivier demonstrates Amazon Polly’s capabilities at 1:06:25.
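Calling Polly from code is a one-liner with boto3. A sketch, assuming configured AWS credentials; "Joanna" is one of Polly's stock voices, and the request-building helper is separated out so it can be checked without AWS access:

```python
# A sketch of turning text into speech with Amazon Polly via boto3.

def polly_request(text: str, voice_id: str = "Joanna") -> dict:
    """Build keyword arguments for Polly's synthesize_speech call."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": "mp3"}

def synthesize_to_file(text: str, filename: str = "speech.mp3"):
    import boto3  # requires AWS credentials to be configured
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(**polly_request(text))
    with open(filename, "wb") as f:
        f.write(resp["AudioStream"].read())  # the returned MP3 audio stream
```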
Amazon Lex (1:11:06) is a web service that allows you to build conversational interfaces using the same natural language understanding (NLU) and automatic speech recognition (ASR) models that power Alexa.
Amazon Lex does not just support spoken natural language understanding, it also recognises text (1:12:09), which makes it useful for chatbots.
Olivier demonstrates those text recognition capabilities in a chatbot demo (1:13:50) of a customer applying for a credit card through Facebook.
Amazon Rekognition (1:21:37) is an image recognition and analysis service, which uses deep learning to identify objects in pictures.
Amazon Rekognition can even detect facial landmarks and sentiments (1:22:41), as well as image quality and other attributes.
You can actually try Amazon Rekognition out (1:23:24) by uploading photos at CodeFor.Cloud/image.
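Rekognition's label detection returns each label with a confidence score. This sketch filters a response for high-confidence labels; the sample response is hand-written in the service's documented shape, not real output:

```python
# Filter an Amazon Rekognition detect_labels response for labels detected
# above a confidence threshold. The sample response below is hand-written
# for illustration.

def confident_labels(response: dict, threshold: float = 90.0) -> list:
    """Return the names of labels at or above the confidence threshold."""
    return [label["Name"] for label in response.get("Labels", [])
            if label["Confidence"] >= threshold]

sample_response = {
    "Labels": [
        {"Name": "Car", "Confidence": 98.5},
        {"Name": "Person", "Confidence": 95.1},
        {"Name": "Bicycle", "Confidence": 62.3},
    ]
}
print(confident_labels(sample_response))  # → ['Car', 'Person']
```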
Retrospective and Real-Time Data Analytics
AI is a combination of 3 types of data analytics (1:28:10) – retrospective analysis and reporting, real-time processing, and predictions that enable smart apps.
Cloud computing is extremely useful for machine learning (1:29:57) because it allows you to decouple storage and compute requirements for much lower costs.
Amazon Athena (1:31:56) allows you to query data stored in Amazon S3, without creating a compute instance to do it. You only pay for the TB of data that is processed by that query.
Best of all, you will get the same fast results even if your data set grows (1:32:31), because Amazon Athena will automatically parallelise your queries across your data set internally.
Olivier demonstrates (1:33:14) how Amazon Athena can be used to run queries on data stored in Amazon S3, as well as generate reports using Amazon QuickSight.
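The Athena workflow Olivier demonstrates can be sketched in a few lines of boto3; the query, table name and output bucket below are hypothetical, and running it requires configured AWS credentials:

```python
# A sketch of submitting an Amazon Athena query with boto3. Athena reads
# the data directly from S3 and bills per amount of data scanned, so no
# compute cluster is ever provisioned.

def athena_query_params(sql: str, output_bucket: str) -> dict:
    """Build keyword arguments for Athena's start_query_execution call."""
    return {
        "QueryString": sql,
        "ResultConfiguration": {
            "OutputLocation": f"s3://{output_bucket}/results/",
        },
    }

def run_query(sql: str, output_bucket: str) -> str:
    import boto3  # requires AWS credentials to be configured
    athena = boto3.client("athena")
    resp = athena.start_query_execution(**athena_query_params(sql, output_bucket))
    return resp["QueryExecutionId"]  # poll this ID for results

params = athena_query_params(
    "SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    "my-athena-output")
```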
When it comes to data analytics, cloud computing allows you to quickly bring massive computing power to bear, achieving much faster results without additional cost (1:41:40).
The insurance company AON used this ability (1:42:44) to reduce an actuarial simulation that would normally take 10 days, to just 10 minutes.
Amazon Kinesis and Amazon Kinesis Analytics (1:45:10) allow the processing of real-time data.
A company called Dash is using this capability to analyse OBD data in real-time (1:47:23) to help improve fuel efficiency and predict potential breakdowns. It also notifies emergency services in case of a crash.
On 18 April 2017, the One World Hotel was besieged by a massive crowd. One might have thought they were there for a rock concert. They were really there for the Amazon Web Services Summit 2017. Join us at AWS Summit 2017 and find out what’s new in Amazon Web Services!
The AWS Summit 2017
With 2 keynotes and over 20 technology sessions, the AWS Summit 2017 was a great opportunity for IT managers and professionals to get updated on the latest AWS services, and what they have in the pipeline.
The highlight of the AWS Summit 2017 was a 90-minute keynote by Adrian Cockcroft, Vice President of Cloud Architecture Strategy, Amazon Web Services.
Here are some key takeaways from his presentation :
Amazon Web Services is adding new capabilities on a daily basis, with over a thousand in 2016.
Amazon will introduce Lightsail, a simple VPS service, to the Singapore AWS Region in the next few weeks.
Amazon Athena allows you to quickly query data stored in S3, whether it is compressed and/or encrypted. It will also be available in the Singapore AWS Region in the next few weeks.
Amazon Connect is a cloud-based contact center solution that is available today. It leverages Amazon Lex for natural language understanding and automatic speech recognition, and AWS Lambda for data and business intelligence.
AWS also announced the Amazon Aurora PostgreSQL-Compatible Edition service, which is currently in developer preview. It promises to offer several times better performance than a typical PostgreSQL database at 1/10th of the cost.
AWS Lambda just introduced support for Node.js 6.10 and C#, AWS Serverless Application Model and Environment Variables.
The existing AWS DDoS protection has been rebranded as AWS Shield. It protects all web applications from volumetric and state exhaustion attacks.
The new AWS Shield Advanced service is designed to protect enterprises against more sophisticated attacks. It includes advanced notifications and cost protection, as well as WAF (Web Application Firewall) at no additional cost.