
#Drennn.

Members
  • Posts: 1,725
  • Joined
  • Last visited
  • Days Won: 13
  • Country: Palestine, State of

Everything posted by #Drennn.

  1. As kids we are told that cheats never prosper, a proverb that life seems to be doing its best to disprove. So why not get in on the grift? Nerial's latest is another card game, but it isn't your average deck-builder, though in a sense it shares something in common with this now-ubiquitous genre. You'll need to master a variety of techniques to cheat your way up to the upper echelons of 18th-century French society, even putting your life on the line in the highest-stakes games. But on a fundamental level, it's all about ensuring you've got the strongest hand. The idea behind Card Shark began to form when artist and former filmmaker Nicolai Troshinsky developed a fascination with card magic. But it only became tangible after he watched Stanley Kubrick's period drama Barry Lyndon. "There's this very memorable scene where the main character cheats with cards," he says. "And that quickly connected to what I was learning at the time about card manipulation." He started researching the mechanics of card cheating in the hope of creating something that matched the sense of mischief of the scene in Kubrick's film: a playful kind of swindle. Troshinsky was confident others would find it appealing, because the process of learning the tricks proved so enjoyable to him. "It creates a kind of loop – and it's very exciting. Like, you learn a thing, you practise it, you execute under pressure. I thought I could translate that into an interactive experience. It's exactly the same process but, of course, massively simplified, so it's fun to interact with." The fantasy, in other words, without the years of practice. Any good scam requires an accomplice, and that happens to be you. Rather than sitting at the table, in an early game you're hovering by it – as a wine server, you can peek at the hands of the other players to pass information to your colleague by wiping the table in specific ways. "You have to memorise the code, you have to perform it under pressure because you can't peek for too long, then you have to signal the correct thing," Troshinsky explains. Later games introduce other methods of subterfuge: if you don't know your opponents' cards, perhaps you can cut or shuffle the deck to ensure your partner has the best possible chance of winning. New techniques are gradually introduced: with some you'll effectively accomplish similar goals through different means, while others raise the challenge by having you play among a more suspicious group. Troshinsky reckons there are "30 to 40" in total, likening them to more complex versions of WarioWare or Rhythm Heaven minigames, and as you move through the game, the sequences grow longer and more elaborate. For Troshinsky, it's just like learning magic. "What I realised is that there's this combination of skills – there's some memory, some counting, some manipulation skill, doing something fast – you've got all these different techniques, so there's a variety of skills you can develop." There's strategy, too. Other players will become distrustful if you tarry while glancing over their shoulders or botch a false shuffle. But you'll be able to recover from mistakes, intentionally losing hands to avoid blowing your cover. "If you screw up midway, you can either try to rush your last few techniques to get it done and win," Troshinsky says. "Or you can sacrifice some games and try again in the next round." And should you get caught? Troshinsky says you'll be punished, but what that means in game terms is still in the balance.
"Some games might result in you losing all your money, and some might result in you going in jail. Some might result in immediate death! We want you to feel the tension of the consequence of being caught. But, at the same time, we don't want people to rage quit." Reassuring to know that here, at least, there are repercussions for fraudulence.
  2. The accident happened a short time ago, at around 6 pm. Firefighters from Mondovì and Dogliani were also on the scene to secure the vehicles. The road was closed in both directions. A car and a motorcycle collided a short time ago, at around 6 pm today (Tuesday 4 May), on provincial road 9 in Via Autostrada. The crash occurred just before the toll booth for the A6, near the Langa Pneus shop. A 118 medical team with a doctor on board attended the scene to provide first aid. The air ambulance was called in to transport the motorcyclist (whose identity and condition are not yet known) to the nearest hospital. Firefighters from Mondovì, together with the Dogliani volunteers, were also on site to secure the vehicles. Police attended the scene to carry out the usual investigations and manage traffic. The road was temporarily blocked in both directions. UPDATE, 7.30 pm: the motorcyclist was flown by air ambulance to the Santa Croce hospital in Cuneo. He is not currently in a life-threatening condition (yellow code). One of the other people involved in the collision suffered minor injuries and was taken by ambulance to the nearest hospital for checks (green code).
  3. For most people, an Alzheimer's diagnosis would be devastating. But Dr Daniel Gibbs is not most people - he's a neurologist who not only has specialist understanding of the condition but also happens to have early-stage Alzheimer's himself. While he admits he's "disappointed" to have the disease, Gibbs says he's also "fascinated" by it - and considers himself lucky. He stumbled upon his diagnosis 10 years ago, before he developed any cognitive symptoms (Gibbs took a DNA test to trace his ancestry, which revealed genetic links to Alzheimer's). This ultimately gave him the chance to tackle it very early on. "It's easy to say I'm unlucky to have Alzheimer's," says Gibbs. "But in truth, I'm lucky to have found what I found, when I found it." As a result, the American neurologist, now 69, has devoted his life to researching the disease and what can be done to slow its progress. He's now explained his findings in a new book - A Tattoo On My Brain: A Neurologist's Personal Battle Against Alzheimer's Disease - which reveals the lifestyle choices Gibbs, and many in the dementia community, believe can help slow the progress of Alzheimer's, particularly in its early stages. And by early, he means before there are even any symptoms (there can be changes in a brain with Alzheimer's up to 20 years before there are any cognitive signs, Gibbs points out). He says he's "still doing well" but Gibbs started getting cognitive symptoms around nine years ago, when he began having problems remembering the names of colleagues, and retired soon after. He now has increasing problems with his short-term memory, often can't recall what he did an hour ago, and needs to write down all his plans and keep a meticulous calendar. Still, he insists: "Most people would have no idea I have Alzheimer's." Gibbs believes the lifestyle modifications he's made since his diagnosis have helped slow the progression of the disease, and says such lifestyle measures also appear to reduce the risk of getting Alzheimer's in the first place. "The important message is all of these modifications are likely to be most effective when started early, before there's been any cognitive impairment," he says. "The pathological changes in the brain that result in Alzheimer's disease begin years before the onset of cognitive impairment - up to 20 years for the amyloid plaques. Once nerve cells in the brain start to die off and cognitive impairment begins, lifestyle modifications seem to have less, if any, impact." Gibbs says the same is probably true for drugs, adding: "The time to intervene, both with lifestyle modifications and with potential drugs, is almost certainly early, before significant cognitive impairment has occurred." Although the Alzheimer's Society hasn't seen Gibbs' book, Dr Tim Beanland, the society's head of knowledge, agrees healthy lifestyle measures are thought to help slow the disease's progress. "There's growing evidence to suggest regular exercise, looking after your health, and keeping mentally and socially active can help reduce the progression of dementia symptoms," says Beanland. "We know that what's good for the heart is good for the brain, so a healthy diet and lifestyle, including not smoking or drinking too much alcohol, can help lower your risk of dementia, and other conditions like heart disease, stroke, diabetes and some cancers." Here, Gibbs outlines six steps he says people can take to help reduce the risk and slow the progress of Alzheimer's in its very early stages. 1. 
Exercise. There's overwhelming evidence that regular aerobic exercise reduces the risk of Alzheimer's and slows the progression of the disease in the early stages by as much as 50%, Gibbs says. The evidence for a beneficial effect of exercise is robust except in the late stage of the disease, when it may be too late to intervene.
2. Eat a plant-based diet. A plant-based, Mediterranean-style diet appears to reduce the risk of getting Alzheimer's. The evidence is most compelling for a variant of the Mediterranean diet called the MIND diet (Mediterranean intervention for neurodegenerative delay), which emphasises adding green vegetables, berries, nuts and other foods rich in flavanols.
3. Mentally stimulating activity. While games and puzzles may be helpful, it's particularly important to challenge the brain with new learning, as this is thought to help develop new neuronal pathways and synapses. Examples include reading, learning to play a new musical piece, or studying a new language.
4. Social engagement. This can be hard for people living with Alzheimer's because apathy is often a part of the disease. There's evidence that those who remain socially active have slower progression.
5. Getting adequate sleep. This is an emerging area of research. There appears to be a cleansing of the brain of toxins, including beta-amyloid (a protein which forms sticky plaques in the brains of people with Alzheimer's), during sleep by the so-called glymphatic circulation. Also, sleep disorders including sleep apnoea are common in patients with Alzheimer's and should be treated if present.
6. Diabetes and high blood pressure treatment. Both these disorders - diabetes and high blood pressure - can aggravate Alzheimer's pathology in the brain as well as lead to vascular dementia, a condition that often coexists with Alzheimer's. Therefore, detecting these issues early and ensuring they're well managed is also important.
A Tattoo On My Brain: A Neurologist's Personal Battle Against Alzheimer's Disease by Daniel Gibbs with Teresa H. Barker is published by Cambridge University Press on May 6.
  4. Since the collapse of John Kerry’s peace initiative in 2014, the peace process between the Israelis and Palestinians has entered a prolonged hibernation. Both the Israeli and Palestinian political systems are preoccupied with internal discord, and the international community has either lost interest or the belief that it can make any difference in terms of ending this intractable conflict. Periodically, the idea of the formal recognition of a Palestinian state by other states or international bodies such as the UN or the EU is floated as a game-changer that will break the impasse. So far, it has been more of a trickle than a flood of countries and institutions recognizing Palestine as a state — most notably the decisions by the UN General Assembly to upgrade it to non-member observer state status and by UNESCO to admit it as a full member. However, the much-coveted recognition as a state by the UN Security Council has remained elusive, mainly due to the veto power of the US. This means Palestine remains a hybrid political entity that many countries consider a state but won’t go so far as doing the honorable thing and recognizing it as a state regardless of the Security Council. This means it does not get treated as an equal member of the community of sovereign states. In an insightful column in the pages of this newspaper, Ramzy Baroud last month highlighted the double standards of politicians who tend to support full recognition of Palestine while they are in opposition, but avoid the issue altogether when they win elections and assume power, leaving their pre-election promises to ring hollow. Hence, Baroud rightly stressed that the recent decision by the opposition Australian Labor Party (ALP) to recognize Palestine is merely an example of symbolic politics, which at best raises hopes of the policy being translated into something more substantive whenever the ALP returns to power. The power of symbolism cannot and should not be underestimated, but there is also overwhelming evidence that international recognition of Palestine would serve the causes of peace, justice and international law. For too long, the issue of recognition has been framed as a prize waiting for the Palestinians at the end of negotiations. This has always put Palestinian negotiators in an inferior position around the negotiation table vis-a-vis Israel, which is not only a superior military and economic force that is occupying its land, but one that is formally a state. Laying to rest the question, and the whip, of Palestinian self-determination would accelerate the peace negotiations and give them a better chance of succeeding. As long as those in a position of power in Israel still toy with the idea that postponing or procrastinating over Palestinian statehood can delay or even kill off the possibility of Palestinian sovereignty, it remains an incentive for them to never enter into genuine peace negotiations, let alone conclude them with an agreement. After all, the already-existing asymmetry between Israel and the Palestinians was enshrined in the Oslo Accords, where the Palestinian Liberation Organization (PLO) reaffirmed its recognition of Israel’s right to exist and, in turn, Israel recognized the PLO as the sole representative of the Palestinian people, but not Palestinians’ right to self-determination. 
Recognizing a people’s right to self-determination is not some kind of prize that one country bestows on another out of generosity, but is a founding principle of the UN Charter, as an important pillar of developing friendly relations among nations and for peace to prevail. The UN might not be able to enforce an end to Israel’s occupation of the West Bank and its blockade of Gaza, but it can and should do the decent thing in line with its own articles of faith and recognize Palestine. Moreover, it was UN Resolution 181 that determined that “Independent Arab and Jewish States… shall come into existence in Palestine two months after the evacuation of the armed forces of the mandatory Power has been completed but, in any case, not later than 1 October 1948.” Surely, more than 70 years after this resolution was passed by a large majority, it is a complete anomaly that only half of it has been fulfilled, regardless of the origins of this failure. UN Security Council recognition will do no more than reaffirm what was supported by a majority of member states back in 1947. Furthermore, the international community, especially the US and the EU, has invested immense political capital, not to mention vast amounts of financial and security aid and support, in many areas of state-building for the Palestinians. The failure to bring this huge effort to its just and logical conclusion should weigh heavy on the shoulders of the international community. Its current reluctance to become involved with the Israeli-Palestinian conflict, and especially to resolving it, should not be compounded by a change of heart over the importance of the Palestinians’ right to justice and self-determination. The international community’s apathy is more a case of despair and a mistaken belief that it is powerless to change the situation. Recognition of Palestine as a state would be a relatively easy move for the EU, for instance, and not that costly, especially with a more sympathetic US administration for the next few years, even if Washington, considering its own domestic constraints, is not capable of following suit. However, it would create a new dynamic within the international community, even if issues such as final borders, Jerusalem and a fair and just solution for the refugees remain unresolved for the time being. These issues would, in any case, be easier to negotiate after Palestinian statehood is normalized. Lastly, Tel Aviv has tried for years to frame the so-called “unilateral recognition” of Palestine as an anti-Israel act. This is complete and utter nonsense and should be seen by the international community as no more than another PR exercise in guilt-tripping those who support dozens of UN resolutions to this effect that reflect an international consensus. Recognizing Palestine reaffirms the undeniable and inalienable rights of a nation, in the very same way that led the Zionist movement to establish the state of Israel. To recognize Palestine is to oppose the occupation, not to compromise the right of Israel to exist in peace and security. The onus now is on the conscience and wisdom of the international community. • Yossi Mekelberg is professor of international relations and an associate fellow of the MENA Program at Chatham House. He is a regular contributor to the international written and electronic media.
  5. Software: you need it. You can't achieve your business goals without it, and you have coders on staff. You have project managers. However, that salesperson from that big software company is telling you to just buy their product, and it does seem awfully attractive. Should you do it? Here are five reasons to buy, not build, software.
1. Focus. You don't have to make all the calls. When you buy, you can concentrate on your business without having to spend time and resources on all the maintenance and bug fixes; that's what you pay your vendor to take care of for you.
2. Cost. The vendor has an economy of scale spread out across all of its clients, and you get the benefit of that. You don't have to shoulder all the costs, especially up front.
3. Time. When you buy software, you spend time learning it and implementing it, but that's it. You don't have to spend time waiting for it to be made.
4. Updates. New features you need may arrive before you even knew you needed them. Since vendors are trying to keep all their clients happy, you benefit from the wisdom of their crowd of customers.
5. You're always staffed. You never have to worry about losing support and training the next person to maintain the software. The vendor will always have staff ready to help.
If you prioritize efficiency, convenience and the need for unending support, buying may be right for you. Don't just take this list's word for it. Do you value control and customization? There is another list of the top five reasons to build software that you may want to compare this one to.
  6. NEW DELHI: Nineteen companies have applied to avail the benefits of the government's production-linked incentive (PLI) scheme for manufacturing IT hardware in India, the Centre said on Tuesday. The scheme was announced in March and is similar to the one announced last year for the local manufacture of mobile phones. The applicants for IT hardware production include 15 domestic companies and four global firms. Along with multinationals Dell, Wistron, Rising Stars Hi Tech (a Foxconn subsidiary) and Flextronics, domestic manufacturers such as Lava, Micromax, Dixon Technologies, Infopower, Syrma Technologies, Neolync Electronics, Optiemus Infracom, Netweb Technologies, Smile Electronics, Panache Digilife and RDP Workstations have expressed interest in manufacturing IT hardware such as laptops, tablets, all-in-one personal computers (AIOs) and servers. The PLI scheme for IT hardware offers an incentive of 2-4% on net incremental sales of goods over the base year of 2019-20. The benefits will be available till 2024-25, starting 1 April 2021. According to government estimates, companies eligible for the scheme will be producing IT hardware worth ₹1.6 trillion over the next four years. Of this, exports will account for 37%, or ₹60,000 crore. The government said that the scheme will facilitate investments worth ₹2,350 crore in India's electronics manufacturing space and generate 37,500 direct employment opportunities. India's domestic value addition in IT hardware manufacturing is expected to grow from 5-12% to 16-35% over the next four years, it added. "The world is looking at India as a destination to manufacture, and participation in the scheme by global companies is a resounding vote of confidence in the current government's policies," said Pankaj Mohindroo, chairman of the India Cellular and Electronics Association (ICEA), the apex industry body for homegrown electronics companies. While the industry welcomed the move, global shortages of chips and other components have affected the ability of local manufacturers to meet the targets set under the scheme. The ICEA had written to the government in March, requesting that the base year for paying PLI benefits be revised to 2020-21 instead of 2019-20. The industry body said that 15 out of the 16 PLI applicants for producing mobile phones will not be able to meet the targets required under the scheme. Unlike the PLI scheme for IT hardware, the incentives for the mobile manufacturing scheme were to be paid out from 2020. According to an industry executive, the matter is currently under review at the ministry of electronics and information technology (MeitY), and the government hasn't said no to reviewing the base year yet. According to a report from Counterpoint Research, the global chip shortage may extend further since Taiwan, which accounts for nearly 70% of all chip manufacturing, is facing its worst drought in more than 56 years. "Taiwan dominates semiconductor production globally due to its unique position in the foundry and outsourced assembly and testing (OSAT) industry. Semiconductor mass production also uses some of the most advanced technologies, which make setting up a production unit a high-investment and time-consuming affair. This is why the current drought in Taiwan has set alarm bells ringing the world over," wrote Brady Wang, semiconductor analyst at Counterpoint Research, in a blog post.
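To make the incentive mechanics above concrete, here is a minimal sketch of how an incentive on net incremental sales could be computed; the 2% rate and the rupee figures in the example are assumptions for illustration only, not the scheme's actual slabs, caps or eligibility rules.

```python
# Illustrative sketch only: an incentive paid as a percentage of net incremental
# sales over a base year. The 2% rate and the figures below are assumed; the real
# scheme's slabs, ceilings and eligibility conditions differ.

def pli_incentive(sales_claim_year: float, sales_base_year: float, rate: float = 0.02) -> float:
    """Incentive = rate x net incremental sales over the base year (floored at zero)."""
    incremental_sales = max(sales_claim_year - sales_base_year, 0.0)
    return rate * incremental_sales

# Assumed example: Rs 1,000 crore of eligible sales in the base year (2019-20)
# and Rs 1,500 crore in the claim year, so the incentive applies to the Rs 500 crore increment.
print(pli_incentive(1500.0, 1000.0))  # 10.0, i.e. Rs 10 crore at a 2% rate
```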
  7. Five years ago, the Battlefield Easter Egg Community believed they had stumbled upon something big. After discovering morse code hidden within the Battlefield 1 MCOM radios, a dedicated myth-hunting team went about attempting to decipher it, but struggled to make head or tail of the message inside. "We had one last desperate hope," explains HotelMama, a leading figure within the Battlefield Easter Egg [BFEE] network. "The game's MCOMs are actually designed after [Italian inventor Guglielmo] Marconi's telegram machines, and because he also invented a code to go with it, we wanted to get our hands on a code book. There were only super rare hard copies available, so we bought one." Unfortunately, before the pricey purchase had even made its way into the community's hands, DICE tweaked the morse code in an unexpected patch, and this time the translation was disappointingly clear: "Message corrupted, await further instructions". "This basically meant that there was no functional Easter egg, and we had bought a super expensive code book just for the heck of it," HotelMama continues. "To not waste it totally, we decided to organise a world tour, sending the book not only to both DICE studios in Los Angeles and Sweden to be signed by the developers, but to select members of the community which had chimed in on the costs. We call it the Marconi Bible, but it's basically lore that never actually helped us in solving anything!" Shark tale Established in 2016, the Battlefield Easter Egg Community is now over 40,000 members strong, committed to solving and uncovering the series' ever growing number of secrets and ARGs. But while Battlefield has a long and storied history of hiding various Easter eggs within its multiplayer maps, it's the megalodons – giant, prehistoric sharks who roamed the oceans millions of years ago – that have offered the most visually memorable surprises of all. People who've never even played Battlefield will likely have heard of the megalodons, showing up across multiple games since their first appearance in Battlefield 4, and making headlines every time a new sighting is discovered. There's a reason the Jurassic shark made the cut as the inspiration for the BFEE's official emblem, after all. "They're a recurring theme, and basically a mascot to DICE as much as us hunters at this point," says HotelMama. Despite their gargantuan size, the megalodons have been notoriously difficult to summon, involving a multi-step process that can only be undertaken by groups of players working together in a private server. This is after the community has figured out how to summon the shark in the first place, of course, which itself takes a lot of collective deduction, time-intensive experimentation, and forensic scouring of the teasers that DICE's development team put out on social media. "The first step is searching for anything out of the ordinary," explains HotelMama of the hunt. "This is especially the case with game updates, as anything that has changed is immediately suspect. Once someone has found a point of interest, we start throwing everything at it (literally and figuratively) until something goes click or boom. Usually, Easter eggs have a very strong 'this is it' feel to them: once you see it, it's undeniably on purpose and not just 'those four pixels look like an arrow'. Once you can predict how an Easter egg behaves, you've cracked its mechanics. Even these are to be taken with a grain of salt, though, as some are simply bound to random generation." 
If this exhaustive process sounds like a slog, BFEE member KopaTroopa123 is keen to stress the contrary: "The experience of being involved in the hunt is a huge rush! Being there and finding pieces of the puzzle or putting them together is a genuine build of excitement and adrenaline. Figuring it out can get mundane, but when you're chatting and joking with friends, it isn't boring – you're just spending time hanging out with your buddies!" "It's not just about the end result of finding the Easter egg, but the journey along the way," adds BFEE hunter PergySkeel. "The laughter and jokes shared, the frustration and the banter, the theories over the wild ideas of what the Easter egg could be or how to solve it, [and] the thrill of the chase. I've made many friends along the way, too. We send each other holiday cards and birthday gifts, use our real names, send text messages, and share about our families. Gaming in general is about bringing people together, and when you add in an additional passion for solving morse code, puzzles, chasing crazy Dan Brown-type conspiracy theories, you get bonded to one another." Fish out of water Indeed, despite the last Battlefield game releasing over two and a half years ago, the Battlefield Easter Egg Community is as active as ever, whether sharing stories and new discoveries on Discord, jumping into a server together to follow the latest leads, or just reactivating fan-favourite Easter eggs, helping newcomers obtain some of the exclusive rewards that come from doing so, such as special outfits or dog tags. DICE continues to engage positively with BFEE, too, to the point where the community's achievements are now immortalised in some of the best Battlefield games with their very own Easter eggs inspired by its members. With Battlefield 6 set to launch later this year, the community is now anticipating a busy holiday season, as DICE's next-gen title will no doubt come packed with a veritable basket of new Easter eggs to find. Whether that basket includes a megalodon is anyone's guess, but given the shark's fondness for the oceans, puddles, and rivers of DICE's multiplayer battlefields, it's probably best to keep both eyes open when going for a dip in the upcoming game's digital waters. "One of the best parts of Easter egg hunting is simply seeing what DICE comes up with next," says HotelMama. "I'm excited to just wait and see what happens!"
  8. happy birthday Qween ❤️ 
    And may your days always be happy
    1. Qween


      Thank you ❤️

  9. 3 May 2021 - Without a private parking space and a charging point of their own, around 20% of California's plug-in "pioneers" are giving up on the plug-in car, even though their situation is more favourable than Italy's. To date, no plug-in hybrid fully lives up to all the expectations it has created. The results of the latest American study argue against a future in which electric and plug-in cars spread quickly and easily. Fans of "real" engines, who hope eco-fuels will spread at the expense of electric cars, are applauding the latest study released by the ITS at the University of California. The headline figure, in spite of those who present electric and hybrid cars as the best choice today for many uses, is that as many as 20% of well-motivated plug-in pioneers end up abandoning them. It is a very detailed study, and every aspect of it would be worth examining before passing judgment. The fact remains, however, that even among people whose circumstances are compatible with an electric or hybrid car, and in California rather than on an island in southern Italy, one in five has to give it up. Are manufacturers and dealers wrong to sell them? Will the forecasts have to be drastically scaled back, even more so in Italy, which has greater limitations? In the US nobody is assigning blame or dramatizing; instead, the study offers pointers that politicians should not ignore, limiting itself to explaining a phenomenon that may in fact stem from mistaken purchases, even in years when communication and marketing should be at their best. Willing families, but the charging... Of more than 4,000 American households that owned electric or plug-in hybrid cars in years that were less pioneering for them than they would be in Italy (2012-2018), around 20% had to "go back" to petrol and diesel vehicles. The three main reasons were not outright poor performance, maintenance, or consumption that differed from the declared figures, but charging that proved harder than expected, the very thing that should be clearly explained to a buyer. Even in California it can be difficult to find an available public or private charging point, especially where the BEV or PHEV owner has no private space in which to install at least a wallbox, and especially where dedicated charging spaces exist but are used improperly, or not as intended, and so remain occupied longer than necessary to really serve everyone. A look at the family, housing, income and education profiles of America's BEV and PHEV pioneers holds no great surprises. They are people who, on average, "can" afford an electric or hybrid car; some had bought it as a second vehicle, some were leasing it. In any case, around 20% made the wrong choice, fairly evenly across all profiles, while still appreciating the car, apart from a few cases (this applies to BEVs as much as to PHEVs; indeed, the latter fall even further short of the hoped-for goals). The Californian analysis of those who try a plug-in car and go back to the fuel tank is, in percentage terms, hardest on the Fiat brand and far gentler on Tesla. The reasons are clear and varied, not least because, in Fiat's case, it was the first 500e, with various limitations compared with the new model sold in Italy, while Tesla offers longer ranges.
  10. Brands have been highly affected by the current COVID-19 pandemic: physical stores have been forced to close, companies are closing stores due to declining revenue, and distribution and production difficulties are affecting stock inventory. Furthermore, we have seen a fast digitalization of the entire industry, with digital fashion fairs popping up all over the globe, digital fashion shows in place of the normal catwalks, and the entire ecosystem in need of digital solutions to manage the new reality. It is no secret that the companies that understand and utilize effective digital solutions will be the winners of the future within their space. Fashion companies are no longer just companies working with fashion; they also need an effective digital backbone to manage all their processes and be able to sell products both B2C and B2B. Think inventory management, purchasing, fulfillment, reporting and business automation. An Impressive Virtual Showroom Solution The post-pandemic reality has pushed companies to rewire their operating models to enable further flexibility, faster decision-making and, of course, an effective digital setup. Traede, a fashion and lifestyle platform, has tailored its digital products to meet exactly these growing needs for brands all over the world. For showrooms and lookbooks, Traede has created a product called Virtual Showroom, which enables companies to showcase products with an easy-to-use and impressive online virtual showroom platform. Critical features such as lookbook, linesheet/ordersheet, image and video bank, product info and taggability are all integrated. Best In Class Digital B2B Wholesale Webshop A company selling its products B2C and delivering a nice digital experience should also be able to deliver a nice digital experience to its B2B webshop customers. We all know it: you are looking for the top-selling newest products or stocking up on your existing inventory, and it means either calling sales agents, filling out horrible Excel spreadsheets or visiting confusing and very old-school online ordering platforms. Traede has changed all this by creating an easy-to-get-started, intuitive and fully customizable B2B webshop. This way, companies are now able to deliver a seamless and smooth experience when going into the B2B wholesale universe. With Traede you can now give your agents and sales reps strong digital sales tools and never miss an up-sell or a new sales opportunity. '...Traede is the fastest order placement system our sales reps have ever used...' -Simon Aaby, Han Kjøbenhavn. All the Tools You Need in One Unified Platform If an impressive virtual showroom or an effective B2B webshop is not enough for the digitization of your business, then Traede might have the perfect solution for you. Let us introduce the All-in-One solution, delivering all the tools you'll ever need in one unified platform. Manage your virtual showrooms, your B2B webshop and your entire backbone, covering areas such as sales management with B2C, B2B and EDI orders in one place. Manage your purchasing and production on real-time data instead of gut feeling. If getting full insight into which products you have and where they are, what has been sold and to whom, and hence what needs to be purchased sounds too good to be true, then think again. With Traede All-In-One, inventory management has never been easier. The platform utilizes real-time data, so setting up custom reporting is intuitive and lets you take decisions based on data.
Last but not least, you can optimise your business by integrating with other applications such as 3PL, B2C shops, accounting and payment providers, marketplaces and much more.
  11. China’s steel industry is blaming the concentrated ownership of Australia’s iron ore mines for the soaring ore price and is calling for Chinese government intervention. ‘We believe that the supply side is highly concentrated and the market mechanism is not working, so we call for the authorities to play a bigger role in the event of market failure,’ Luo Tiejun, vice president of the China Iron and Steel Association, told an industry conference last week. The reality is that the market is working well and that the Chinese authorities have much less power to influence it than Luo might imagine. Astronomical iron ore prices reflect the inefficiency of China’s own iron ore mines rather than any alleged monopolistic behaviour by the major Australian mining companies. It must be galling to Chinese authorities that, notwithstanding their determination to punish Australia for its many perceived sins, their annual imports from Australia are running at near record levels and appear likely to surpass the $150 billion peak reached in the first half of last year. The iron ore price has been nudging close to a record US$200 a tonne, more than double the price of a year ago and three times the price expected by the Australian government when it compiled last year’s budget. The price is delivering fabulous profits to Australian mines and is also boosting Australian government tax revenues. The cost of extracting iron ore is not much more than US$16 a tonne for BHP and Rio Tinto. In any commodity market, the price is dictated by the highest-cost or marginal supplier. After making allowance for quality and transport, all suppliers of a commodity get the same price in an open market, regardless of what it costs to produce. So, the highest-cost producer is the one that would stop mining if the price were to fall, leaving the market short of supply. In the iron ore market, the highest-cost producers are all Chinese. China normally takes around 70% of the seaborne trade in iron ore, or around 1 billion tonnes, but it relies on domestic production for a further 900 million tonnes. However, around three-quarters of China’s domestic production needs a price of at least US$100 a tonne in order to operate, with some mines having much higher break-even thresholds than that. China’s iron ore reserves are low quality and require expensive heat treatment before they can be fed into steel mills. Production peaked at 1.5 billion tonnes in 2015, but has fallen because of the sector’s poor economics and government efforts to stop shallow strip mining, which is environmentally damaging. While the high operating costs of China’s most marginal mines set a pricing floor, the market has been swept along by demand fuelled by government stimulus spending, both in China and across the world, in response to the Covid-19 pandemic, which has fired steel-hungry construction and consumer goods industries. Chinese authorities had set an objective of reducing steel production in line with their goal of reducing carbon emissions and particle pollution. In the heart of China’s steel district in Tangshang, 150 kilometres east of Beijing, mills have complied with orders to lower production, but that has caused a panic among steel buyers and sent the steel price soaring. Steel mills across the rest of China have been ramping up production in order to take advantage of the strong price. China’s total steel production is running at record levels. 
The failure of the central authorities’ efforts to shut down inefficient and high-polluting steel mills has been chronic since the price boom of 2009–10 and partly reflects the fact that local steel mills are more responsive to provincial authorities, for whom they are an important source of revenue, than to Beijing. Historically, the iron ore price tracks the steel price, so the current price boom at least partly reflects the tensions in government policy to control air quality and carbon emissions. Global iron ore supply hasn’t been especially weak, but it has not responded to the increase in demand both from China and from the rest of the world. Australia has been supplying about 60% of world iron ore trade, with exports forecast to reach 900 million tonnes this year. China has been buying up what it can from wherever it can get it, including a doubling of its purchases from India. However, at around 30 million tonnes, India is only a marginal supplier. China’s great hope is that Africa will provide some relief from its dependence on Australia. It is looking at projects in Algeria, Congo and Guinea, with the last the most advanced. Analysts expect the Simandou project in Guinea, in which Rio Tinto has a stake alongside Chinese state-owned resources group Chinalco, will proceed, with a total cost of around US$20 billion. It is a large and high-grade orebody, although the infrastructure challenge of getting the ore to port is formidable. China’s Global Times last week declared that ‘the exploitation of the Simandou mine in Guinea, with the participation of a Chinese company, is expected to help cut the heavy reliance on Australia for imports’. However, the project, which has been stalled for more than a decade amid corruption claims and conflicts between the government and partners, would take several years to complete, and production in the initial phase is expected to be in the region of 100 million tonnes a year, or about two-thirds the output of Australia’s Fortescue Metals Group. It’s not a big enough project to materially transform the global iron ore market or to free China from its dependence on Australia. The steel association’s demand that the authorities do something about the surging iron ore price mirrors their response to the iron ore price spike in 2008–09, which was generated by Chinese government stimulus spending in response to the global financial crisis. Then, as now, they blamed Australian producers. The Chinese government was thwarted by the Rudd government in its attempt to have Chinalco take over Rio Tinto, which it thought would give it the ability to keep a lid on iron ore prices. At some stage, China’s economy will move beyond its heavy reliance on construction as the source of its growth and become more focused on services. That might come as a result of a maturing of its economy or it could be sparked by a debt crisis. A serious cut in China’s demand for steel and iron ore could see prices plummet to something much closer to the marginal cost of Australian iron ore mines, but such a result wouldn’t come from the interventions demanded by the China Iron and Steel Association.
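The marginal-supplier logic described above, where the clearing price gravitates toward the cash cost of the most expensive mine still needed to meet demand, can be illustrated with a toy cost curve. The costs and capacities below are assumptions that only loosely echo the article's figures, not a model of the actual market.

```python
# Toy sketch of marginal-cost pricing: walk up the supply cost curve until
# cumulative capacity covers demand; the last (highest-cost) supplier needed
# sets the price. All numbers are assumed for illustration only.

suppliers = [
    {"name": "Low-cost Australian mines", "cost_usd_per_t": 16, "capacity_mt": 800},
    {"name": "Other seaborne supply",      "cost_usd_per_t": 40, "capacity_mt": 600},
    {"name": "Chinese domestic mines",     "cost_usd_per_t": 100, "capacity_mt": 900},
]

def clearing_price(demand_mt: float) -> float:
    """Return the cost of the marginal supplier needed to satisfy demand."""
    supplied = 0.0
    for s in sorted(suppliers, key=lambda s: s["cost_usd_per_t"]):
        supplied += s["capacity_mt"]
        if supplied >= demand_mt:
            return s["cost_usd_per_t"]  # this producer is the price-setter
    raise ValueError("demand exceeds total capacity")

print(clearing_price(1200))  # 40: high-cost domestic mines not needed, price stays low
print(clearing_price(2000))  # 100: high-cost mines become the marginal supplier
```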
  12. The outbreak of the Covid-19 pandemic last year – and the remote working environment it rapidly created – are continuing to drive strong demand for software to be more accessible for both consumers and businesses. From the way we use technology to how we carry out our work, the limits on travel and social distancing guidelines have led to a constantly changing landscape with a need to consume services in a different way. Amid this business uncertainty, there has been a significant shift among organizations to invest in operations and switch to selling their solutions online. While many firms are familiar with the transition to subscription licensing and are looking to boost revenue from new delivery models and packaging of functionality, for others this will be a dramatic shift from how they have operated in the past. This is why 2021 is set to be a significant year for software licensing and an area that businesses need to understand in order to defend and grow their revenue. Evaluating and optimising their software licensing approach can allow businesses to create new and innovative revenue streams, increase operational efficiency and generate customer insights – which in turn will help organizations to understand the demands of their consumers and develop products with their customers in mind. In addition, amid the current climate and financial turbulence, this can help provide reliable and predictable revenue streams. So, what trends can we expect to see in 2021, and what should be the key focus for companies looking to enhance their software licensing? Battling customer churn With subscription licensing, continual customer engagement becomes key. The goal is no longer about just winning the customer; it's now about an ongoing relationship with that customer to provide value and keep them engaged. Businesses must continue to deliver benefits for their consumers, or they risk losing them to a competitor – and producing a successful subscription licensing program starts with a vendor's company structure and culture. Pre-pandemic, we saw more hyper-focused businesses with a traditional organizational structure – where customer support, supply chain and IT were all very singular functions. In 2021, we will begin to see more organizations align with how customers use their products. Through data analytics, this shift in direction, from product-focused to customer-centric, will enable businesses to understand where to invest in the future, focusing on areas customers are actually getting value from. It also means businesses can go back and continually delight the customer with new products and services, keeping that regular interaction alive and providing an opportunity to upsell that is harder to achieve from one-time transactions. Shifts in flexible business models This chance to upsell leads into the next change we'll see for businesses in 2021. Traditionally, organizations with a broad product portfolio would bundle their goods together in an 'all you can eat' buffet-style licensing agreement. Now vendors can offer their customers bundles of product features, underpinned by license models that create value to match market sector and geography needs. In 2021, more businesses will look to adopt subscription models to match end-customer demands for greater choice and flexibility. Customer demands and expectations are changing, and many want to avoid upfront investment and be able to easily scale up or down.
As such, this shift means that software businesses will have to change too. One of the other big trends in buying models – especially when trying to acquire new customers – will be the emergence of subscriptions offered in tiers of capabilities. From trial to premium models, this has the potential to lower the cost barrier, allowing more customers to enter at lower cost, experience the benefits, and then, once they derive value out of it, upgrade to a higher offer with more features. This will also enable companies to upsell these new features and grow their revenue base. Even for customers who aren't able to upgrade, or who choose not to, lower cost barriers mean that those who were previously priced out of the market are able to buy into entry-level offerings. This approach ensures that vendors can capture more of the market. Not only do flexible business models allow vendors to tailor their products more closely to individual customer needs, but they also provide opportunities to regularly communicate with consumers. For those with a smaller investment, it is possible to maintain their engagement and growth with incremental changes each time a licensing agreement is up for renewal. M&As leading the pack Finally, as businesses continue to face tough decisions in order to survive in a post-Covid-19 landscape, mergers and acquisitions (M&A) will spike in 2021 as some sectors struggle to keep afloat and larger ones look to solidify their position by acquiring others. However, the post-M&A integration of organizations can be complex – especially during the pandemic, when businesses need to move quickly. In order to come out of this emerging trend on top, businesses must ensure there is a central approach towards policies and procedures around software licensing. This includes a consistent approach to software delivery, entitlement and licensing that results in fast onboarding of products from acquired businesses into the main organization. In doing so, all components will come together, boosting business efficiency and scalability. So, as we move through 2021, and businesses accelerate their digital transformation to compete in this increasingly connected world, just adopting software is no longer enough. Successful software organizations will be those that recognize the emerging trends and the increasing importance of software licensing subscription services as an integral part of growing their revenue. Those that act now to improve customer engagement and ensure that their business models offer greater flexibility will be the ones that succeed.
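As a rough illustration of the tiered subscription model described above, here is a minimal sketch of trial, standard and premium entitlements with an upgrade path. The tier names, prices and feature sets are invented for this example and are not taken from any particular vendor or licensing product.

```python
# Minimal sketch of tiered subscription licensing: a low-cost entry tier, an
# upgrade path once value is proven, and feature gating per tier. All tier
# names, prices and features are assumed for illustration only.

from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    monthly_price: float
    features: set[str]

TIERS = {
    "trial":    Tier("trial", 0.0, {"core"}),
    "standard": Tier("standard", 49.0, {"core", "reporting"}),
    "premium":  Tier("premium", 199.0, {"core", "reporting", "api_access", "priority_support"}),
}

@dataclass
class Subscription:
    customer: str
    tier: Tier = field(default_factory=lambda: TIERS["trial"])

    def can_use(self, feature: str) -> bool:
        # Feature gating: only features included in the current tier are available.
        return feature in self.tier.features

    def upgrade(self, tier_name: str) -> None:
        # The upsell path: enter on a cheap tier, move up once value is proven.
        self.tier = TIERS[tier_name]

sub = Subscription("Acme Fashion")
print(sub.can_use("api_access"))  # False on the trial tier
sub.upgrade("premium")
print(sub.can_use("api_access"))  # True after upgrading
```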
  13. Gungnir of Norway is a start-up that develops and manufactures high-quality training equipment, with a strong international focus. The company's first product is an Olympic barbell with a built-in lock for securing weight plates. The design is patented, with priority in the largest markets on every continent, and the company has already made its mark internationally. Our new CEO joins the company in a growth period in which international commercialisation is central. The main task of this role will be to lead the company onward through this global growth. You should have solid economic and financial expertise, as this will be one of your areas of responsibility. At the same time, you must be able to lead the company at a strategic level and plan with a clear overview for further growth. We offer a unique opportunity to take part in an exciting growth and development phase. The top management role will be a visible one, and you will be part of a team of six people in total, in a working environment driven by high ambitions and dedication. Four of the six team members are part of the management group, whose areas of responsibility cover business development, product design and development, internal and external production, purchasing, corporate sales, e-commerce, marketing and more.
Your responsibilities: Overall financial management of the company. Leading and planning for an exciting and rapidly growing international phase. Establishing and maintaining a strong company structure that looks after employees and the company's ambitions for development, growth and scaling. Establishing agile working processes that leave room for further expansion of the product portfolio and safeguard the company's structure and profitability. Being willing to pitch in on day-to-day tasks when needed.
Qualifications: Higher education in economics, business development or another relevant field. Relevant management experience. Experience from a company or organisation in growth and commercialisation phases is desirable. International business experience is considered an advantage. A background in finance, business development, project management or strategy processes is considered an advantage. Ability to travel abroad, primarily in commercial contexts. Fluent in Norwegian and English, written and spoken. Experience with recruitment processes is considered an advantage.
Personal qualities: Unifying and motivating, focused on creating an inspiring working environment and a robust organisation equipped for growth. Big-picture, strategic thinking combined with operational decisiveness, drive and the ability to execute. Collaborative, present and confident with employees, partners, customers and owners. Business-focused, analytical and critical in mapping and prioritising business opportunities and projects. Plan-oriented and hard-working, with the ability to create structure and predictability in a phase in which demanding growth targets are to be realised. Outgoing and approachable, unafraid to stick your neck out. An interest in and experience with strength training and/or other physical activity.
We offer: An ownership programme with the opportunity to take part in the journey and the company's financial development. Salary by agreement. A top management role with responsibility for finance and commercialisation at a very exciting hardware company that develops and manufactures a patented product with great international potential.
An opportunity to develop and have a strong influence on the short- and long-term development of the company. A chance to work in a company that is growing ever faster and can point to positive commercial development since 2020. A creative, open and relaxed environment driven by high motivation and dedication. A flexible and autonomous workplace.
  14. Since Among Us exploded in popularity last year, other development studios have attempted to capitalize on the emergence of social deduction as a mainstream gaming genre. Some social deduction video games did exist before Among Us, but they were mostly confined to their own PC niche, with the only titles that broke through to any semblance of mainstream audience being Town of Salem and Garry's Mod's Trouble in Terrorist Town game mode. The main benefit that InnerSloth brought to the genre is the streamlining of core gameplay mechanics and objectives. Both Town of Salem and Trouble in Terrorist Town can get bogged down in convoluted rules and player abilities, whereas Among Us is relatively simple. This likely helped Among Us become popular on streaming platforms, since its easy-to-understand gameplay makes it especially watchable. Due to the relatively recent revival of the genre and the platform's more efficient development and distribution process, most Among Us-like games are found on PC. The following examples are all available on Steam. It's important to note that, while the similarities between these titles and Among Us are sometimes quite stark, they aren't necessarily copies, reproductions, or clones of InnerSloth's game. Each has its own unique mechanics, and many likely began development before Among Us rose to popularity. Other Games Like Among Us On Steam Unfortunate Spacemen Unfortunate Spacemen, a free-to-play game released in June 2020, is very similar to Among Us in many ways, but it has a first-person perspective. It has a few other distinguishing features, such as proximity voice chat and A.I.-controlled space monsters that attack players, and it's a good option for those who enjoy FPS games. Agrou Agrou takes the easy-to-understand, Impostor-versus-innocents concept of Among Us and blends it with the purely communication-based gameplay found in Town of Salem. It takes place in a medieval-esque setting and follows the premise of traditional deduction games like Werewolf and One Night Ultimate Werewolf, with players all sitting around a campfire and attempting to find the werewolf hidden among them. First Class Trouble Two more recent games have attempted to shake up the genre and expand upon the foundation created by Among Us. The first, First Class Trouble, features a group of players trying to escape a doomed luxury airship, with robots disguised as humans seeking to stop them. The main change this game brings to the formula is requiring the help of an innocent group member to kill other players, whereas imposters are usually able to kill on their own in other games. Dread Hunger Second is Dread Hunger, perhaps the most exciting addition to the social deduction genre, as it adds many new elements and layers to the established gameplay loop. On a sea ship in the arctic, players attempt to sail to safety and survive both harsh conditions and the evil crewmen hidden amongst them. What makes Dread Hunger unique is its unforgiving survival elements, such as hunger, thirst, and warmth. The social deduction genre has seen tremendous growth thanks to Among Us, and the steady stream of new games looks promising for the future. The only factor holding the genre back is its lack of a presence on consoles, but given its current popularity, that problem is almost surely only temporary.
  15. CARRODANO - A young motorcyclist lost his life on the Passo del Bracco, at Mattarana in the municipality of Carrodano on the Aurelia, after crashing into a car travelling in the opposite direction. Help arrived immediately but was of no use to Efrem Marconi, 39, from Deiva Marina, who went into cardiac arrest and, despite resuscitation attempts, died. The news spread in an instant through the hamlet of Piazza, where the young man lived with his parents. He worked in La Spezia aboard tugboats and was a great motorcycle enthusiast. Today that passion cost him his life on a stretch of the Passo del Bracco that has already been the scene of serious accidents in the past. Hundreds of messages on social media underline the affection felt for Efrem Marconi, and the charitable association 'Bracco in sella', of which he was a member, observed a day of silence on social media as a sign of mourning.
16. Price: $720,000 Location: Manjimup Area: 2.93ha Agent: Nutrien Harcourts WA Contact: Don Lyster 0427 778 116 LIVING the country lifestyle is no longer a dream when you look at this property recently listed by Nutrien Harcourts WA. Within a five-minute drive of all the facilities in the regional centre of Manjimup, with its retail, medical and educational services and sporting opportunities, is this special lifestyle property. With plenty of room for the larger family or a change for active retirees, you could not help but be impressed by what is on offer. The four-bedroom, two-bathroom home is a steel-framed brick veneer and Zincalume dwelling with impressive living areas. Together with an open-plan kitchen, dining and family rooms, there is an impressive fully enclosed entertaining area, games room and storeroom with heating. The home has solid fuel heating, air-conditioning, solar hot water with fire and electric boosters, reticulated gardens, a greenhouse, lawns and a house orchard. For the active handy person there is a top quality 24m x 9m x 4m high Colorbond shed with a 6m x 9m lean-to. There are four roller doors with a lockable personal access door that provides security. The shed space is divided by two interior walls to facilitate workshop, machinery or caravan storage. Power is two-phase but can be upgraded to three-phase, and the space has pot belly heating. Water supplies are mainly rainwater, with a 45-kilolitre concrete tank, and surplus is piped to the fully lined dam, which has a capacity of about 1600 kilolitres. Supplementary supply comes from the equipped bore, which delivers 2250 litres per hour. Also included on the property are about 450 camellia shrubs (Japonica and Sasanqua), which are used to supply florists with foliage. The block is fenced with Ringlock and barb on wood posts and is in good condition.
  17. Before Palestinian President Mahmoud Abbas postponed the Palestinian legislative elections, some observers thought that there would be fierce electoral competition that could lead to political change. Others argued that elections are the only way to achieve national unity and end the Palestinian internal rift between Fatah and Hamas, the two dominant political movements in Palestine. But a closer look at what was going on in the electoral race betrays a different reality. The election was more likely to produce a “shamocracy” that would maintain the deep-rooted structures of oppression, tyranny and fragmentation. This is because the two political forces that have dominated the Palestinian political scene over the past 15 years and are vying for power again, have inflicted severe damage on the Palestinian national movement, depleted the national liberation project, and exacerbated vertical and horizontal fragmentation within the Palestinian society. As a result, over the decades, Palestinians have become mere observers of their plight and cause, unable to participate in political developments in their own communities. Indeed, their feeling of alienation in their homeland and estrangement from their government is a form of oppression tantamount to that inflicted by the Israeli colonial occupation. Palestinians need a government that liberates rather than enslaves them. When the elections are eventually rescheduled, Fatah and Hamas will once again try to monopolise the vote. The worst the Palestinian electorate could do is give them legitimacy again by voting for their candidates. This would only strengthen their positions and reinforce their authoritarianism, leaving Palestinians in their current predicament for years to come. But this is not an inevitable outcome. The elections, despite all their fundamental shortcomings, can still be an opportunity to transform the Palestinian political system, if approached differently. Political forces who desire genuine change in Palestinian politics should seek to steer the Palestinian public away from the disastrous choice of the status quo. They can encourage voters to punish the two dominant political powers and make space for the emergence of new political leadership. This would be the first step to holding them accountable at the grassroots level for undermining the Palestinian struggle. The punishment drive does not necessarily require that Palestinians vote for other lists in the elections. To demonstrate their rejection of the status quo, they can just cast invalid ballots that read “neither Fatah, nor Hamas”, “no to a pathetic political regime”, “no to corruption”, or “no to division”. With such no-confidence votes, opposition voices can consolidate in an act of resistance to expose ruling authorities and parties and send a clear message: “enough messing with our national project and future”. This would also constitute a fundamental rejection of the Oslo Accords framework and the political and governance regimes it created. Rejectionist, confrontational, and collective action requires exposing the authorities before the masses as a precondition for change. This electoral process can be used to pinpoint to the public the failings of the governing regime that they suffer from. It is an opportunity to change the people’s attitudes and perceptions, which will necessarily lead, sooner or later, to a change in their actions. 
For example, both Fatah and Hamas in their electoral campaigns are promoting themselves as the “protectors of the Palestinian national project”, using either the “state-building” discourse or the “resistance” rhetoric. This is the time to expose the fallacy of the “national project protector” notion, which serves only as a pathetic cover-up for all the damage and harm that both movements have inflicted on the Palestinians.
18. The top 10 software tools for text analysis, analytics, and mining are changing the way text data is used. “Data is the key to business success” is a statement well understood by small, medium, and big organizations across the globe. No company can withstand digital disruption without sufficient information about its customers, employees, and other stakeholders. While on the mission to acquire insights, businesses come across all kinds of structured and unstructured data from various sources. Text analysis, text mining, and text analytics play a big role in extracting decision-making information. Text stands out from other resources like videos, images, and documents because customers often write their opinions, reviews, and feedback in the form of text after using a product or service. As companies came to understand the importance of structured text data, they started using software for text analysis, text mining, and text analytics. These technologies apply statistical pattern learning to find patterns and trends in text data. Analytics Insight has listed the top 10 software tools for text analysis, analytics, and mining that are changing the way text data is used. Top 10 software for text analysis, analytics, and mining DiscoverText DiscoverText delivers powerful enterprise text analytics to staff, students, and researchers in an easy-to-use and affordable way. The software offers dozens of multilingual text mining, data science, human annotation, and machine learning features. To help users quickly and accurately evaluate large amounts of text data, DiscoverText also offers a range of simple to advanced cloud-based software tools. The software lets customers access and sort unstructured text, and associated metadata, found in market research, customer feedback platforms, emails, large-scale surveys, social media, and other forms of text data. IBM Watson Discovery Watson Discovery is an enterprise search tool and artificial intelligence search technology that breaks open data silos and retrieves specific answers to consumers’ questions while analyzing trends and relationships buried in enterprise data. The platform is trained in the language of a customer’s domain and applies machine learning technologies to process text. With Watson Discovery, users can ingest, normalize, enrich, and search unstructured data with speed and accuracy. RapidMiner RapidMiner is a powerful data mining tool that enables everything from text mining, text analysis, and text analytics to model deployment and model operations. The platform brings artificial intelligence to the enterprise through an open and extensible data science platform. Built with a special focus on the data team, RapidMiner unifies the entire data science lifecycle from data preparation to predictive model deployment. In RapidMiner, every analysis is a process and each transformation or analysis step is an operator, making design fast, easy to understand, and fully reusable. Google Cloud Natural Language API Cloud Natural Language API provides natural language understanding technologies to developers, including text analysis, text mining, text analytics, sentiment analysis, entity analysis, content classification, and syntax analysis. The platform uses machine learning to reveal the structure and meaning of text.
Users can extract information about people, places, and events, and better understand social media sentiment and customer conversations. The natural language features in the Cloud Natural Language API also let users analyze text stored in their document repositories on Cloud Storage. Azure Cognitive Services Microsoft’s Azure Cognitive Services bring artificial intelligence within reach of every developer, without requiring machine learning expertise. The software embeds the ability to see, hear, speak, search, understand, and accelerate decision-making into users’ apps. Azure Cognitive Services work across all programming platforms and languages and help incorporate AI functionality into various applications with minimal effort and coding. Bismart Folksonomy Bismart Folksonomy is a next-generation tagging system that allows users to mine their datasets and gives them the information they want in an instant. The software extracts tags from natural language texts, images, videos, and audio files to help locate specific items. The advanced analytics feature in Folksonomy transforms unstructured files into structured ones, delivering insights. Bismart’s Folksonomy has special features such as a user-friendly tool to merge synonyms, a whitelist to separate homonyms, the ability to create technical and customized dictionaries, and a blacklist to reduce tags. Apache OpenNLP Apache OpenNLP is an open-source natural language processing Java library. It provides text analysis features such as sentence detection, tokenization, named entity recognition, part-of-speech tagging, lemmatization, chunking, and language detection. Apache OpenNLP allows users to solve complex text preparation and classification tasks using machine learning methods. The project was developed by volunteers and is always looking for new contributors to work on all its parts. TAMS TAMS, or Text Analysis Markup System, is an open-source program that allows users to quickly code sections of text. It is a convention for identifying themes in texts like web pages, interviews, and field notes. Although TAMS is most often employed to encode ethnographic documents such as interviews, users may also use the program to analyze pretty much any plain text. The platform lets users encode sections of text to make them searchable within their corpus. KNIME Text Processing KNIME Text Processing was designed and developed to read and process textual data and transform it into numerical data so that regular KNIME data mining nodes can be applied. Features of the platform include natural language processing, text mining, and information retrieval. KNIME Text Processing imports textual data, processes documents by filtering and stemming, transforms documents into bags of words and document vectors, and finally clusters the documents based on their numerical representation. Gensim Gensim is an open-source library for unsupervised topic modeling and natural language processing, using modern statistical machine learning. It draws on leading academic models and modern statistical machine learning to perform complex tasks such as building document or word vectors and corpora, performing topic identification, comparing documents, and analyzing plain-text documents for semantic structure.
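To make the kind of workflow these libraries support more concrete, here is a minimal topic-modelling sketch in Python built on Gensim's Dictionary and LdaModel classes; the toy corpus, the choice of two topics, and the variable names are illustrative assumptions rather than anything from the article.

```python
# Minimal Gensim topic-modelling sketch (illustrative only).
# The toy corpus and the num_topics value are arbitrary examples.
from gensim import corpora, models

docs = [
    "customers praised the fast delivery and friendly support",
    "the delivery was late and support never replied",
    "great product quality but the price feels high",
    "pricing is fair and the product quality is excellent",
]

# Tokenise very crudely; a real pipeline would normalise text and remove stop words.
tokenised = [doc.lower().split() for doc in docs]

# Map tokens to integer ids and convert each document into a bag-of-words vector.
dictionary = corpora.Dictionary(tokenised)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenised]

# Fit a small LDA model and print the words that characterise each discovered topic.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```

The same bag-of-words corpus could then feed Gensim's similarity utilities for the document-comparison use case mentioned above.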
19. Realme expanded its number series last month with the addition of a few more models in the form of the Realme 8 and Realme 8 Pro. And while the Realme 8 Pro offered premium features like an AMOLED display, super-fast charging and a 108 MP primary shooter, the standard Realme 8 follows a more balanced formula by offering just the right specs at just the right price (starting at Rs 14,999). In our review, we take a look at how good the Realme 8 is and whether it is a worthy successor to the Realme 7. Design The majority of the changes on the Realme 8 are on the outside. The company has opted for a new design and bold finish on the Realme 8. Whether you get the phone in Cyber Black or Cyber Silver, the ‘Dare To Leap’ branding is spread across the entire back. Our model came in a silver finish with a glossy back panel. The silver finish has an edgier look, while the black colour looks more subtle. While I wasn’t a big fan of the edgy finish, it is certainly eye-catching. However, my gripe was not with the overall look, but with the glossy finish, which attracts fingerprint smudges from the get-go and takes away from the bold design. The back panel itself is plastic and so is the frame, although you do get some glass protection on the screen. There’s a box-shaped camera module on the back with four sensors. You get a SIM tray on the left and a volume rocker and power button on the right. On the bottom, there’s a speaker grille and a headphone jack. The Realme 8 looks quite appealing, but the back panel is too prone to smudges, which almost forces you to use the included cover.
20. Valheim is a very successful survival game despite only being in early access, but how does its map size compare to that of other survival games? Although Valheim has only been around for about three months, it has already gathered a lot of attention from players interested in survival games. In this game by Iron Gate Studios, players have to prove themselves worthy of entering Valhalla by surviving in the dangerous land of Valheim. While this sandbox game is not officially released yet, Valheim has already become one of the most played games on Steam. However, the full release is expected to bring considerable improvements, including finishing some of the biomes. The game's map is approximately 314 square kilometers, and it takes a little over four hours to cross it on foot. There is no question Valheim's map is big, but how big is it compared to other survival games? Rust Facepunch Studios' Rust is another survival game, fully released in 2018. This game is a multiplayer-only title in which players have to watch their backs for other users who might be roaming the area. The game's map is procedurally generated to further ensure uniqueness. On its early release, Rust's map was about 3.95 square kilometers, which means the island players get to explore in Rust is considerably smaller than the land of Valheim. Another similarity between the two maps is that they are procedurally generated, which means players will get to see a different map every time they start a new game. Minecraft Ever since its release in 2011, Minecraft has been a huge success. Players of all ages continue to play the game to this day, and it does not seem like they will stop any time soon. One of the reasons that Mojang's game has become so popular is that Minecraft's map is extremely big, randomly generated, and changes with every new game. This is exciting because of the sandbox nature of Minecraft, which means players get to build in different settings every time. Comparing Minecraft to Valheim is a little different because Minecraft worlds are usually measured in blocks. Minecraft worlds are often described as "infinite" because no player will realistically reach their edge, though they get buggier the further the player travels from the spawn point. The world border sits roughly 30 million blocks from the origin, and each block is a square meter, so a single Minecraft world spans billions of square kilometers, making it vastly bigger than Valheim's map. No Man's Sky One of the features that made this game so popular is the attractiveness of No Man's Sky's extremely big world. This title by Hello Games was highly anticipated, but it went through a rough start as the game was buggy. Still, No Man's Sky has continued to improve, making the game more attractive. This space-set game generates planets procedurally as well. The game has over 18 quintillion planets, making its explorable space incomparably larger than any Minecraft world. Some of its planets alone can be 9000 square miles. Some approximations say that it would take over 500 billion years to explore the full world, and while this is likely, maybe, hopefully an exaggeration, this game's map is considerably bigger than Valheim's. Valheim is available now for PC.
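The comparison above is easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below uses the figures quoted in the article where they exist, plus the commonly cited 10 km Valheim world radius and a roughly 30-million-block Minecraft world border, so the outputs should be read as estimates rather than official numbers.

```python
# Back-of-the-envelope map-size comparison (rough estimates, not official figures).
import math

# Valheim: a circular world commonly described as having a ~10 km radius.
valheim_km2 = math.pi * 10 ** 2                 # ~314 km^2, matching the figure above

# Rust: the early-release figure quoted above.
rust_km2 = 3.95

# Minecraft: the world border sits roughly 30 million blocks (metres) from the origin.
minecraft_km2 = (2 * 30_000_000 / 1000) ** 2    # ~3.6 billion km^2

print(f"Valheim   ~ {valheim_km2:,.0f} km^2")
print(f"Rust      ~ {rust_km2:,.2f} km^2")
print(f"Minecraft ~ {minecraft_km2:,.0f} km^2")
```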
21. The seven-time world champion set the fastest time in the second free practice session on Friday, on the circuit hosting the Portuguese Grand Prix. He put an end to an anomaly: seven-time world champion Lewis Hamilton (Mercedes) topped his very first free practice session of the Formula 1 season at Portimao, at the Portuguese Grand Prix, on Friday afternoon. With a time of 1 min 19.837 sec in the second session, in windy conditions that made life difficult for everyone, the Mercedes driver finished 0.143 sec ahead of Dutchman Max Verstappen (Red Bull) and 0.344 sec ahead of Finn Valtteri Bottas (Mercedes). In the morning, those two had swapped the top spot, with Bottas finishing ahead in 1 min 19.648 sec, just 0.025 sec quicker than Verstappen. » READ ALSO – F1: Valtteri Bottas in an ejector seat at Mercedes As expected, it is the emerging duel of 2021, Hamilton against Red Bull's Dutchman, that is taking shape for the third round of the championship on the undulating Algarve circuit, where F1 has raced only once before, last October. With one win apiece, the two are almost level in the world championship, the Briton holding a single point's advantage (for the fastest lap at the Emilia Romagna GP). In qualifying, they also have one pole each. For Hamilton, already the record holder in that exercise, it would be his... 100th if he finishes on top on Saturday at 3:00 pm local time (4:00 pm in France)! Before that, there is still the third free practice session at 12:00 (1:00 pm). "We look very close, I think it's going to be tight," the Briton anticipates, while adding that he still has something in reserve. "I don't know how Max's lap went but mine wasn't perfect," he says. "The car can be even quicker and there are improvements to be made. But I'm sure that's the case (for Red Bull) too." "It still looks tight with Mercedes," Verstappen agrees. "The feeling isn't bad but there's always work to do." "Whoever gains the most time overnight will be on pole," Bottas concludes. » READ ALSO – Toyota reigns supreme in qualifying at Spa, Alpine off the pace Behind them, Ferrari rather than McLaren is emerging as the third force, with Monegasque Charles Leclerc fourth in the morning and Spaniard Carlos Sainz Jr in the same position in the afternoon. Spearhead of the British team at the start of this season, Englishman Lando Norris admits he is "struggling in certain sections of the track" and points to another threat: Alpine, with Spaniard Fernando Alonso and Frenchman Esteban Ocon fifth and sixth in the second practice session. "We are in a good position (but) it's going to be a close battle, given how tight the times are," notes Ocon. His compatriote Pierre Gasly (AlphaTauri), 11th in FP2, reckons his car is "struggling in every area compared to usual." Silence on social media Away from the track, don't expect to follow all of this weekend's action on social media. Hamilton, a champion of the fight against racism in F1, Leclerc, Verstappen, Bottas, Lando Norris, George Russell and Esteban Ocon have joined the boycott movement launched by English football. Their aim: to protest against online hate.
Formula 1, for its part, told AFP on Thursday that it supports the action without taking part in it, unlike UEFA, European football's governing body, and the International Tennis Federation (ITF). Abusive posts and insults aimed at footballers on social media have increased 4.5-fold since September 2019, Manchester United said on Friday. 86% of the targeted posts contained racist abuse and 8% were homophobic or transphobic.
22. Most diseases stem from — and get aggravated by — poor lifestyle choices. Ever since the pandemic, people have tried to pay more attention to their overall health so as to stay disease-free. Many people around the country continue to seek treatment for cancer, and among them, colorectal cancer is one which needs urgent attention. Dr Rahulkumar Chavan, consultant surgical oncologist, Hiranandani Hospital Vashi — A Fortis Network Hospital — says that colorectal cancer (CRC) is a disease which can be attributed to the “negative impact of changing lifestyle and food habits”. “It is often called a ‘western lifestyle disease’. Consumption of tobacco, alcohol, a diet high in processed meat and low in fiber, obesity, and low physical activity are common causes of colon cancers,” he says. What is it? Dr Chavan explains that the colon is a part of our digestive system. As food passes along the digestive tract, nutrients within it get absorbed. The colon, also called the large intestine, turns the liquid form of unused food into solid by absorbing water, and this is expelled as feces or stool. CRC is a disease in which cancer cells form in the tissues of the colon or the rectum. “Our body has around 30 trillion cells which have pre-programmed rules, governed by genes, about how to behave. In simple words, cancer cells have psychopaths within; they defy all these rules and multiply fast, grow out of control, invade nearby structures of our body, and can spread to a distant location within the body in late stages to form a new tumor,” he says. Signs and symptoms It is generally seen in the population above the age of 45, with complaints of changes in bowel habits like a repeated history of constipation or diarrhea, bloating, the passage of blood or mucus in the stool, unexplained weight loss, easy fatigability, pain or swelling in the abdomen, or a drop in hemoglobin levels found during routine investigations. Persons with a strong family history of CRC may have these features in their 20s, too. Delayed diagnosis “Compared to other digestive cancers, CRC has a far better prognosis; still, early diagnosis is of utmost importance. Often, it could be a minor sign or symptom, such as a change in stool colour to red or black, which should prompt an immediate consultation. At times, bleeding from the rectum is assumed to be due to common ailments like piles,” the doctor explains. Early screening for early detection Colonoscopy from the age of 45 is important for the average-risk population. Patients with a family history of CRC or those showing signs or symptoms should seek an earlier colonoscopy after discussion with a doctor. The frequency of repeat colonoscopies will be decided in accordance with the patient’s age, family history, and the findings of the first colonoscopy, Dr Chavan says. Lifestyle modification for prevention * Avoid smoking and alcohol consumption, which are carcinogenic. * Studies suggest avoiding high-calorie foods and red and processed meat also reduces the risk of CRC. * Perform regular exercise of moderate intensity (for at least 30 minutes). * Maintain a healthy body weight to minimise the chances of CRC. * Eat a diet rich in vegetables, fruits and whole grains, which are high in fiber. A high-fibre diet not only reduces the risk of CRC, but also of heart disease.
  23. JERUSALEM Palestine on Friday extended its condolences to Israel over a deadly stampede during a Jewish religious festival in northern Israel. Palestinian President Mahmoud Abbas sent a condolence message to Israeli President Reuven Rivlin. In his message, Abbas expressed sorrow over the loss of lives as a result of the stampede during a religious holiday celebration on Mt. Meron. Palestine prays for those who lost their lives and for the families of the victims, Abbas said. He wished a quick recovery to those injured in the incident. Israel declared Sunday a day of national mourning after at least 45 people were killed early on Friday in the stampede during celebrations of the Jewish holiday of Lag B'Omer. * Writing by Zehra Nur Duz in Ankara
  24. Patenting a piece of software can be a pain. The UK Patents Act rules that ‘software as such’ cannot be patented unless certain criteria are met, while the European Patent Convention states that software can be patented only if it achieves a technical contribution to the ‘prior art’ – all of the publicly available information before a given date that might be relevant to the patent’s claims of originality. Essentially, software as such is not patentable unless the developer can robustly demonstrate a technical effect associated with it. But what does this mean for medical software patenting? Medical Device Network catches up with Potter Clarkson senior associate patent attorney Esmé Swindells about the impact of the pandemic on medical software patenting, the difficulties with in silico data and why patenting is so important for digital health companies. Chloe Kent: Why is medical software patenting so difficult? Esmé Swindells: European practice is that you can’t necessarily patent software per se, it needs to have some kind of technical effect associated with it. In theory it shouldn’t be too difficult to patent your software in the medical field, as it’s not actually too difficult to find a technical effect associated with your software. I think the issue is typically with working out how to draft your claims, making sure you hit that technical aspect that’s required by patent officers. CK: How can a digital health company ensure that its technology is patentable? ES: Work very closely with patent attorneys. For some of our clients we’ll have quarterly meetings and we’ll run through all of their current portfolios to see where their projects are at, but also importantly talk through other research programmes. Obviously patent attorneys know the law better, so they’ll pick up on things that might be patentable, bits of data that sound interesting. It’s a case of making sure, before you file your application, that you assess what the technical feature is and have data that supports this. It wouldn’t be enough in the application to just say ‘look, we’ve used this AI algorithm’, you need to demonstrate the medical technical effect. Data is key for getting patents through in the medical field. Something that’s obviously come out of AI is the in silico data, where algorithms are used in the drug discovery phase to target a particular disease. At the moment it’s a fine line between when you file your patent application based on in silico data and when you’re going to get in vivo data. We typically say to applicants not to solely rely on in silico data, and we try and get in vivo data before filing the main application. If you’ve got in vivo data in your application, the patent is very likely to go through to grant, whereas in silico data is unlikely to hit the sufficiency requirements at the moment.
  25. Four years ago, when Charlie Boyle was about a year into his job running the unit of Nvidia that makes and sells full AI hardware solutions, many IT and data center managers were intimidated by this new class of hardware. Nvidia DGX systems – essentially AI supercomputers – are large, powerful, and gold colored. People that would have to support them in their data centers were worried about the hardware’s power density and intimidated by the presence of InfiniBand (an interconnect technology from the supercomputer world). Generally, they thought the systems would take some learning to get a handle on, and who in IT has time for that? Related: Nvidia Is Designing an Arm Data Center CPU for Beyond-x86 AI Models So, conversations Boyle used to have with customers often tended to start with him explaining that not only were these systems nothing to fear, but that they were designed to require no more, if not less, than the amount of IT personnel hours their typical enterprise servers required. “To an IT administrator, it’s just a bigger Linux box,” he said. “They shouldn’t be scared of it. It’s not esoteric.” Related: Nvidia Rolls Out Slew of New AI Hardware for Data Centers We recently interviewed Charlie Boyle, an Nvidia VP and general manager of the company’s DGX Systems unit, for The Data Center Podcast. We talked about what the arrival of AI hardware means for an IT organization and its data center staff, the role liquid cooling will inevitably play in data centers of the future, Nvidia’s own experience operating a massive AI hardware cluster across multiple data centers, AI computing infrastructure in the cloud vs. on-prem vs. in colo, and more. Listen to the entire conversation here or wherever you listen to podcasts, and be sure to subscribe: Nvidia’s internal DGX cluster, referred to as “Saturn 5,” has now grown to about 3,000 systems. And it’s managed by “very few IT staff,” Boyle told us. The IT organization is “smaller than you would expect for an organization that’s running thousands of high-end servers. Because at the end of the day, they’re all the same. It’s an IT administrator’s dream.” AI Hardware in the Data Center While for IT staff a DGX system is “just a bigger Linux box,” this infrastructure doesn’t necessarily mean business-as-usual for data center managers. “Empty is the new green,” is how Nvidia spins the fact that a typical enterprise data center will have a lot of empty rack space around each DGX system deployed, because there likely won’t be enough power in a single rack to support more than one of these GPU-stuffed boxes. “Empty” and “green” in this scenario means that while you’re left with a lot of empty space, the space that isn’t empty is occupied by a box that has enough compute to replace thousands of Intel x86 pizza-box servers, Boyle said. A few years ago, he and his colleagues liked to say that a single DGX replaced a hundred or a few hundred x86 boxes. They no longer say that. “We don’t even make those comparisons anymore, because the number is into the thousands at this point,” he said. Yes, a single DGX may take a lot more power than a few pizza-box servers occupying the equivalent amount of space, he said, but “with good planning, it doesn’t really matter… as long as you can fit at least one of these systems in a rack.” The more space-efficient ways to deploy this hardware involve higher rack densities, including, ultimately, racks using some flavor of liquid cooling. 
The latter is something everyone designing a data center today should at least plan for being able to add in the future, Boyle advised. Listen (and subscribe!) to The Data Center Podcast with Nvidia’s Charlie Boyle below, on Apple Podcasts, Spotify, Google Podcasts, Stitcher, or wherever you listen to podcasts. PODCAST TRANSCRIPT: Charlie Boyle, VP and GM Nvidia DGX Systems Yevgeniy Sverdlik, Data Center Knowledge: Hey, everybody. Welcome to the Data Center Podcast. This is Yevgeniy, editor in chief of Data Center Knowledge. Today we're lucky to have with us Charlie Boyle. He is a VP at Nvidia. There, he runs the DGX systems unit. That unit makes and sells Nvidia supercomputers for AI, lots of really cool stuff we're going to talk about today. Charlie, thank you so much for talking to us today. Charlie Boyle, Nvidia: Thank you and glad to be here. Yevgeniy Sverdlik, Data Center Knowledge: Let's give our listeners a bit of your background. You came to Nvidia five years ago, from some LinkedIn investigation that I've done, after six years at Oracle and you ended up at Oracle because Oracle acquired Sun Microsystems, your previous employer. Is that correct? Charlie Boyle, Nvidia: Yeah. Yeah. I've been in data center infrastructure servers, data center system management, for a lot of years at this point. I don't want to date myself too much but before Sun, I was running service provider data centers and doing engineering R&D for them and then started doing products at Sun, which transitioned over to Oracle and then, obviously, with Nvidia, we had a great opportunity to build AI systems from the ground up as a complete solution and that's really what I like to do is deliver a full solution to a user that's going to solve their problems and in a way that's different than what they've got today. Yevgeniy Sverdlik, Data Center Knowledge: Yeah. Let's rewind a little bit. How did you end up at Sun? Charlie Boyle, Nvidia: Back in my service provider days, as we were running what you would call managed services today for web hosting, ran data centers for thousands of servers worldwide and my data center platform was kind of split between Sun and Solaris boxes and at the time, this will date me a little bit, Windows NT boxes. I knew a lot of folks at Sun because we hosted thousands of Sun servers and had some of the biggest brands on the internet at the time running Sun and Solaris servers. Charlie Boyle, Nvidia: I had moved out to California for a startup to do something interesting in the voice data center space. Then as I was at a joint Sun event, they were promoting a new platform, a data center management platform, if you will, back in the day and this would have been 2002, that sounded really interesting and revolutionary and I knew enough people at Sun, I phoned up a few people and said, "Hey, that thing that this engineering VP just announced, I don't know him but I'm interested in it. Do any of my friends know this guy?" Some of my good friends at Sun were like, "Oh, yeah, I know him well. I'm his marketing counterpart, I'm his product counterpart." Charlie Boyle, Nvidia: We made introductions and joined Sun to work on converged data center platform software, at the time, and had a great career there doing that, doing system management, virtualization, and then also eventually running the Solaris product at Sun that at the time was widely used across the entire high-end unit space. 
Yevgeniy Sverdlik, Data Center Knowledge: That was, the product that we're talking about, that made you curious about joining Sun was the converged data center? Charlie Boyle, Nvidia: Yeah. At the time, it's a product that probably nobody currently has ever heard of. It was called N1 was the initiative. We had a couple acquisitions. We learned a lot. Back in the day, it was a grand vision of a single product managing everything in the data center from servers, network storage and easy control, plain. I think it was a little early in its time back in the early 2000s. That's a really tough problem to crack. Charlie Boyle, Nvidia: We learned a lot along the way from our customers and eventually spun out a number of smaller scope products that really helped customers and then leading to a bunch of data center management setups, including virtualized systems and, at the time, how do you use the built-in virtualization inside of the Solaris operating system better across the data center? Yevgeniy Sverdlik, Data Center Knowledge: Similar in spirit to what the actual converged systems that came after that became hot and then morphed into hyper converged [crosstalk 00:04:28]. Charlie Boyle, Nvidia: Yeah. Yevgeniy Sverdlik, Data Center Knowledge: Perhaps a bit early for ... Charlie Boyle, Nvidia: Yeah. That was a bit early. A lot of that foundation became the converged systems that we were working on at Oracle, things like exodata, ExoLogic, the private Cloud system that they had. All of those were hyper-converged systems but just in a much larger form [inaudible 00:04:49]. Those were rack level designs but to solve a very specific customer problem. Yevgeniy Sverdlik, Data Center Knowledge: You're not the first person that I interview and then I start researching them and it turns out they came up through Sun, ended up at Oracle, and so to ask this question, how did things change at work once Oracle took over Sun? How did things change for you? Charlie Boyle, Nvidia: Of any acquisition, there were positives and negatives. I really appreciated Oracle's sales culture, they want to win, they were very aggressive. Lots has been written of the Sun going down in the final years. Oracle really had a passion for getting things out to their customers and really monetizing things well. Yevgeniy Sverdlik, Data Center Knowledge: They have the business end of things down. Charlie Boyle, Nvidia: As an Oracle customer, lots of times people say they monetize things a little too well but at the same time, it was a very strong culture but they had a very strong engineering presence as well. Lots of bits and pieces of things that used to be Sun got assimilated into Oracle. I actually moved ... When I joined, I didn't join the hardware division at Oracle because I was responsible for Solaris virtualization and system management. I joined the software team, which was a core part of Oracle so, to me, it was just like, "Okay, I'm just a part of core Oracle now" and it was a positive experience in a lot of ways and, of course, there's a lot about culture and other things. It fit fine for me. Some people didn't like it as much but, for me, it was fine. Yevgeniy Sverdlik, Data Center Knowledge: You led development of convergence restructure products at Oracle. You're now running AI infrastructure business at Nvidia. 
How, maybe in general terms, this may sound like a weird question to you but do your best, how is the world of AI infrastructure, that business, that ecosystem, different from the world of converged infrastructure, maybe the more traditional hardware world? Charlie Boyle, Nvidia: At the base, they share a lot of the same fundamental things. I mean, the reason that people like converged infrastructure is from whatever partner you're buying it from, whether it's converged storage or converged compute solution, the value to the end user is it's as easy as possible to get the most benefit out of it. You don't spend a lot of time researching hardware, setting things up, updating things. Charlie Boyle, Nvidia: The vendor has done all that work for you and when we started DGX, and I can't say that I started DGX, the product was under development before I joined Nvidia, I joined probably three weeks before the initial product launch, the vision and part of what attracted me to Nvidia was the vision was very similar, which was we didn't want to build a server. We wanted to build the best AI system for users at the time and back five years ago, really nobody knew what dedicated AI infrastructure was. Charlie Boyle, Nvidia: You had a lot of people experimenting, you had a lot of folks in research and one of the things that we noticed internally at Nvidia is even our own experts in the space, everyone has something different underneath their desk, they were trying different things and Jensen's vision was very straightforward is there's one thing that is the absolute best to do AI work and I want to build that and I want to show the market that so that everyone can learn from it and we can expand the overall AI ecosystem. Yevgeniy Sverdlik, Data Center Knowledge: How did you end up at Nvidia? How did that transition happen? Charlie Boyle, Nvidia: It was through mutual friends and colleagues, some of which were already at Nvidia, and they said, "Hey, we're doing something new here." Nvidia had always been known as a fabulous technology company but more of a chip company at the time. We sold data center infrastructure but it was chips and boards to OEMs and the gaming side, of course, through our AIC partners but as we wanted to bring new technology to the market in a rate that was potentially faster than traditional system builders were comfortable doing those things, we said, "If we're going to tell the market that AI is real and that you should invest in it now, we need to show them how to do that. We need to teach them how to build a world class AI system" and at the time, Nvidia was just announcing its new Pascal Generation of GPUs, which had a lot of never before seen technology. It was the first time you saw NV Link, a private interconnected link to all these different GPUs together. Charlie Boyle, Nvidia: There was also a new form factor. Of course, everyone is familiar with the PCI form factor. You put those cards in the system. In order to get the power of the NV Link to connect all the cards together it was actually a different physical form factor. It's what we call SXM. It's a rectangular module. You've seen all the pictures of it. But that form factor needed a new system to be built around it. You couldn't plug it into a standard PCI server so we had to teach the world how to build this new form of server that combined the fastest GPUs with this new NV Link technology and that's what became the DGX 1 product. 
Yevgeniy Sverdlik, Data Center Knowledge: Was that what drew you to Nvidia, an interesting product you wanted to get involved in? Charlie Boyle, Nvidia: It's a combination of a lot of things. You know, interesting product, interesting space, also a great team. As anyone would tell you after you've been in a career for a while, the team is as important as the work that you're doing and Nvidia just had a great engineering culture, a great innovative culture. I've looked for that in all the companies that I've worked with. The team was great there. I talked to them about what they were doing, why they were doing it. It just felt like a natural fit. Charlie Boyle, Nvidia: As I said, I had some friends that were working there, some of them had been working there for a few months before I got there, some had been there for a few years and all of them said this is a great place to work if you're used to X company back in this point in time, they're very much like other places that I had worked. It really had that spirit of just everyone working together for a common goal to get something done and to do something that hadn't been done before. Charlie Boyle, Nvidia: That's kind of one of Jensen's mandates to all of us is if we're going to invest time in doing something, it has to be both hard, and it has to be something that only we can uniquely do and it also has to be fun to work on. You meet those three tests for a product and it's a great opportunity. Charlie Boyle, Nvidia: I started as an army of one but quickly hired a lot of great folks that I knew through the industry and that I had worked for before and fast forward five years, we're on our third-generation product and have a great team behind it. Yevgeniy Sverdlik, Data Center Knowledge: There's a lot of attention being paid to AI. Obviously, it's a world-changing technology, there's a lot of optimism and a lot of concerns about its implications and the basic question is how do we make sure it's not built early on in ways that will cause irreparable harm to society. How do we make sure it's not putting minorities at a disadvantage or how do we prevent it from being abused by military? How do we prevent it being abused by deep fakes, things like that? Yevgeniy Sverdlik, Data Center Knowledge: As someone who lives in the world of computing infrastructure that enables this technology, do you feel a sense of responsibility for ensuring we don't get it wrong or do you not worry about those things because your focus isn't on the actual software that trains the AI models and makes those potentially consequential decisions? Charlie Boyle, Nvidia: Well, I mean, as a consumer of AI technology, we all interact with AI technology on the consumer level on a daily basis. I hear those concerns, but the thing that I find great hope in is all of the developers, the ecosystem partners, everyone that helps make AI a reality, they're all thinking about these things too or trying to take out bias from systems, try and make sure that AI can really help you. Charlie Boyle, Nvidia: The whole goal and you've heard Jensen say it a lot of times but I think it really comes to heart is AI isn't out there to replace people. It's to make people better. It's to help people along so that your life can be easier, your life can be better, you can get better access to information. 
Charlie Boyle, Nvidia: Back when we were all traveling and you're sitting on a phone call on hold with an airline because your flights got canceled, well, if the AI is better, it should just rebook your flights because it knows your travel pattern and everything and I shouldn't have to sit on the phone with an airline for an hour. Charlie Boyle, Nvidia: There's a lot of good things in it. There's a lot of people out there that will point out it has potential to do bad things. The same things have been said when the computer industry just started, "Oh, you're going to put all these people out of work." All these things. Look at the massive economic gain that has come out of just everyone having access to a computer system. I have high hopes for AI and I think the right people are thinking about the right areas to help make sure there's bounding boxes around things, to make sure that there's human in the loop when you're facing critical and difficult decisions. Yevgeniy Sverdlik, Data Center Knowledge: The obligatory chip shortage question, how has the chip shortage been affecting the GX group, the group you're in charge of? Charlie Boyle, Nvidia: It impacts everyone. If I had a hat on, I would take my hat off right now to our Nvidia operations team. They are I think ... Not I think. I know they're the best that I've ever worked with. While there are shortages, obviously, we build our own [inaudible 00:14:59] so I'm not short of those for the very large scale systems but it's little things. It's like resister, it's a little transistor somewhere, it's a power module but I think coming into Nvidia and understanding how well our operations staff plans, plans years in advance on stuff, they've really been able to protect us on the things that we can be protected by. Charlie Boyle, Nvidia: I can say for DGX supply, while there's a lot of hard work going on behind the scenes to make it all smooth and perfect, we haven't been impacted at the output end of that. That doesn't mean that we haven't had to do a lot more work but the team has been excellent at their craft to make sure that we always have second source, we always have alternative components. We've got months and months of planning behind every build of systems that happened in our factories for these things because at the end of the day, we're not delivering a part to a customer with DGX. We're delivering a whole solution, a whole system that's got software, it's got storage and it's got networking. Charlie Boyle, Nvidia: We can't be short one screw in the system. One screw means I can't ship the system to a customer so the team just does an excellent job forecasting, planning for that. It's really great to see that whole process and be part of it to make sure that we can deliver to our customers what they need. Yevgeniy Sverdlik, Data Center Knowledge: You're saying it hasn't caused any delays in shipments but there's been a lot of worrying and headaches and hard work behind the scenes to make sure ... Charlie Boyle, Nvidia: Yeah. I mean, our goal is to shield our customers from that. They expect a complete product from us. We plan well with them. That's one of the things I really like about working on this type of product is even though all of our systems are sold through partners and channels and those things, we have a lot of direct visibility with our customers. They know what our lead times are. 
Charlie Boyle, Nvidia: I'm very happy to report all throughout the chip shortage and coronavirus crisis, we've still maintained our standard lead time and haven't extended that so our customers know when AI is successful for them and, say, they've started out with just a handful of DGX systems, when they come back to me and say, "Hey, Charlie. I need 40, I need 100, and I need them next quarter", I can still deliver that stuff in standard lead time. Yevgeniy Sverdlik, Data Center Knowledge: Okay. Let's talk about data centers. You like to say that your data center AI infrastructure product design benefits from Nvidia engineers using Nvidia hardware internally. How big is that internal AI computing infrastructure at Nvidia now? How many data centers? Charlie Boyle, Nvidia: Oh, how many data centers? Yevgeniy Sverdlik, Data Center Knowledge: How do you quantify how much power [inaudible 00:17:56]? Charlie Boyle, Nvidia: I can quote you the number of DGXs. We're probably close to 3000 DGXs deployed internally across the range of DGX 1, DGX 2s and the current system, the DGX A100. I don't really count the DGX Station numbers in there because we have those sitting under people's desks and cubes so they're not in the data center. Charlie Boyle, Nvidia: In the data center, for the data center infrastructure that we refer to as Saturn 5, which is all of the Nvidia clusters put together, that's around 3000 systems. It's in multiple data centers. Off the top of my head, I don't know exactly how many. They're mostly all around Santa Clara. I think there's some development that's out of state, probably in Nevada, at this point, but we're probably more than five, less than 10 data centers. We try to keep a lot of our systems together because you get economies of scale but you also get ... As you put more and more of these AI systems together, in close proximity, you have a lot more flexibility for your users. Charlie Boyle, Nvidia: One of the things that Jensen talked about in the keynote was these massive new models, whether it's recommender or speech. Some of these things that would take hundreds of machines to try in a ... It would still take them two weeks to train even with a couple hundred DGXs. You need all those systems physically close to each other because you couldn't put those, say, 500 systems in 100 different data centers. It wouldn't be efficient because all of those systems talk to each other over a local high speed InfiniBand network. You want to have centers of gravity around your AI system deployments. Yevgeniy Sverdlik, Data Center Knowledge: You're splitting workloads across multiple systems, right? Charlie Boyle, Nvidia: Right. I mean, those same 3000 or so systems, all of those things are accessed by everyone inside of Nvidia. One of the interesting things is just by being an Nvidia employee, you automatically have access to up to 64 GPUs in the data center. Now some groups, like our self-driving car and language model teams, those teams have hundreds, if not thousands, of GPUs but that's why Jensen made the investment many years ago. Charlie Boyle, Nvidia: We started with 125 DGX 1s four plus years ago and as soon as we turned on that centralized infrastructure, it was instantly full and so then we kept adding and adding and adding to it because we found the value of having a centralized infrastructure means that people don't have to worry about when am I going to get access? How long is it going to take me to do something?
If the systems are available, they can get their work done and they can expand and contract as they need it. We've put a lot of work together to make it easy for our internal users and that same technology, that tooling, that information flow, actually makes it into the product so all of our customers outside of Nvidia can use it. Yevgeniy Sverdlik, Data Center Knowledge: These clusters sit in co-location facilities? Charlie Boyle, Nvidia: Yes. I can't say Nvidia owns no data center space because, of course, we run our GFN network all over the world for gaming but for our DGX systems, outside of the ones that are in labs in Nvidia buildings, all the large clusters are in co-location because that's not ... While we have strict design specifications and we push our co-location partners to the art of the possible and push the limits on power and cooling, it's not our core business to build data centers. We let the experts build that and we give them all of our requirements to push not only for what they're building today but what they're building in years to come. Yevgeniy Sverdlik, Data Center Knowledge: Is Nvidia the world's largest user of DGX? Charlie Boyle, Nvidia: No. Yevgeniy Sverdlik, Data Center Knowledge: No? Charlie Boyle, Nvidia: It's not. I have customers that have larger deployments than we do. Yevgeniy Sverdlik, Data Center Knowledge: Okay. Can you share how many or who they are or anything about them? Charlie Boyle, Nvidia: I can't. Unfortunately, that's the power of when you have very large customers. You can search through some public stuff on some large customers that we've announced over the years but there are a handful of customers that do have more internally than we do. Yevgeniy Sverdlik, Data Center Knowledge: This internal cluster, the first internal cluster you mentioned there were 125 DGX computers, about five years ago, you deployed it. How did that project come about? Why did you guys decide to do that? Charlie Boyle, Nvidia: That was really based on an early Jensen meeting about how do we get our own internal users something better? Because as he looked across the company, and I can take no credit for this, I just helped implement it, there were all these requests for a work station here, a server there, from all different parts of the company that was trying to do something with AI and we knew we wanted to centralize things but all of us that had been in the IT industry for a long time, going from when people owned systems, "This is my system. It sits underneath my desk" or it's in this one rack space, "This is mine" to go from that to something that is a centralized shared resource, every IT organization struggles with that. How do you do that? How do you get your users to move? How do you motivate them to move so that you can decommission the ineffective barely used systems that they do have? Charlie Boyle, Nvidia: We just made a very simple proclamation backed up by Jensen's investment in it is we're going to give you ... To all of our users in the company, we're going to give you something that is so much better than what you have access to today in your lab or under your desk, that there's no reason you'd want to keep doing it the old way. Yevgeniy Sverdlik, Data Center Knowledge: No brainer. Charlie Boyle, Nvidia: Like I said, once we turned it on, I think it was turned on on something like a Thursday or a Friday and people started to know about it. By Monday morning when people came in, it was already 100% utilized. 
Then we needed to apply a bunch of software and other access controls to it because early on, the guy that got in Monday morning and launched 1000 jobs took up most of the cluster. Then we said, "Okay, it's great it's getting used. Now we need to make sure that it's used fairly across the company" and we started writing software, writing quotas, writing queuing, but it's super successful. Charlie Boyle, Nvidia: Anyone inside the company, you want to try an experiment, you've got a great idea, can try something no approval. Like I said, up to 64 GPUs, that's up to eight DGXs but for the big stuff that we do, if we say, "Hey, we could really make a difference on a medical language model and we need 1000 GPUs to try it on for two weeks", that's something we can do. Having that type of infrastructure really unlocks the imagination of the data scientists we have, the researchers we have, to push the limits of what's possible and that's what you see us ... Not only did we have some hardware announcements at GTC but most of Jensen's announcements were actually software-based and that's because we can do that work and we have the infrastructure. Charlie Boyle, Nvidia: What I've experienced over the past number of years is a lot of our customer base doing the same thing. They're going from disparate infrastructure, one group gets to buy a DGX here, another group gets two there, to saying, "You know what? I just want to buy" what we call a super pod that starts with 20 notes and up, "And I'm just going to use it as a centralized resource for my company and when I need more, I just add to it. I don't need to go give Bob two more or Lisa two more. Just, 'Okay, if the company needs four more, just put them in our cluster" and that's the great thing about the designs that we've come up with. It just scales linearly. You need more systems? We've got a road map that goes from two systems to thousands of systems all in a standardized data center deployment. Yevgeniy Sverdlik, Data Center Knowledge: You're now at several thousand DGXs being used internally. What are some of the biggest lessons you guys have learned about running and using AI hardware at scale in data centers over the last five years? Charlie Boyle, Nvidia: A lot of it has been around software development to make it easy to run very large scale AI jobs. If you're running on a single GPU, even a single system, there's really no software work to do. You run your [inaudible 00:26:31], you run your TensorFlow, whatever framework you're going to use, and you're done. There's nothing that needs to happen. Charlie Boyle, Nvidia: When you start doing what we call multi node training, scaling that training across lots of systems, 200, 400 of systems, there's things that you need to learn in the way you launch those jobs, monitor those jobs, what happens if you're doing a two week training run and one system of your 100 system cluster goes down? Charlie Boyle, Nvidia: A lot of our enhancements to software that we released to customers through our NGC repository, we've taken all of our own internal lessons learned on how to optimize those things and just how to operate those. Charlie Boyle, Nvidia: A lot of what we deliver is knowledge, not even just software. We help teach our customers, okay, if you're going to try to run thousands of systems and you need to run a job that's going to run for an hour on those thousand systems, that's a fairly ... While it's a big system, it's a fairly simple thing. You don't need to think about a lot of things. 
Charlie Boyle, Nvidia: For automotive training, they may have jobs that run over hundreds of systems and last for three weeks. Now you don't want one node in that system, in your third week, to destroy your entire training run, so there are things like checkpointing, saving various states along the way. As you run very long jobs, it's just standard statistics: over enough days, weeks, or months of running hundreds or thousands of systems, you're going to have a system hiccup.

Charlie Boyle, Nvidia: We do a lot to provide the software, the settings, the tools, so that customers can feel safe running very long jobs over hundreds of systems without worrying that a single failure in the cluster is going to take out three weeks of work.

Yevgeniy Sverdlik, Data Center Knowledge: That's an interesting point, because the traditional thinking in the HPC world, which I know is different from the AI world but is very closely related, is that the infrastructure isn't as mission-critical as, say, a bank's data center, so you don't need to build as much redundancy on the data center side of things. But what you're saying is, "Well, if you have a long-running training job, that's very mission-critical and nothing can fail there," or that whole run is screwed.

Charlie Boyle, Nvidia: Yeah. The high availability is different, because a lot of people come from an enterprise background. Obviously, I was at Oracle. Your average database transaction is sub-second, so if you have a system failure and you have to fail over and retry a couple of transactions that were sub-second, no big deal. You didn't lose anything.

Charlie Boyle, Nvidia: If you're looking for a singular answer at the end of multiple weeks, you need to build some redundancy into the software layers so that things can retry and reset, and those are the knobs that we help teach our customers to [inaudible 00:29:34]: "What's your risk exposure? How often do you want to basically collect all your data and say, 'I'm at a checkpoint here'?" Some people checkpoint once a day, because it's fine for them if they lose work; rerunning a day's worth of work on a node is not that big of a deal. Some customers checkpoint every hour. It's really up to their risk profile and how much they think a failure would impact the overall run.

Charlie Boyle, Nvidia: Of course, if you're running on hundreds of systems and you lose one hour's worth of data on one system, your overall training time isn't going to go up that much, but if you're running hundreds of systems and you lose an entire 24 hours, that's a bigger impact. That's why having that close relationship with our customers matters. As a DGX customer, I tell everyone, you're part of the family now, you're part of my family, and you can call us up and ask us data center questions, like what's the optimal way to cool my data center, how do I do [inaudible 00:30:32], what do you recommend, all the way through to what's the right way to run a PyTorch job on 200 DGXs for a massive NLP language model.

Charlie Boyle, Nvidia: It's not an exact number, but I would venture to say half of the people in our technical SA community have advanced degrees, up to PhDs. I was shocked, coming from Oracle and Sun, where we had excellent technical field people, to talk to the average technical field person at Nvidia and find they've got a PhD in linear algebra or computational microscopy or other things. It's just a fabulous team with an immense amount of knowledge.
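A minimal sketch of the checkpoint-and-resume pattern Boyle describes for long training runs, assuming a PyTorch loop; the model, interval, and file path are illustrative, and in a real multi-node job typically only rank 0 of the distributed group would write the checkpoint.

```python
import os
import torch
import torch.nn as nn

CKPT_PATH = "checkpoint.pt"   # illustrative; shared storage in practice
CHECKPOINT_EVERY = 1_000      # steps; pick this from your risk profile
TOTAL_STEPS = 10_000

model = nn.Linear(512, 512)   # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
start_step = 0

# Resume if a previous run left a checkpoint behind (e.g. a node failed).
if os.path.exists(CKPT_PATH):
    ckpt = torch.load(CKPT_PATH)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_step = ckpt["step"] + 1

for step in range(start_step, TOTAL_STEPS):
    x = torch.randn(32, 512)                 # stand-in for a data loader
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Periodic checkpoint: a failure now costs at most one interval of work.
    if step % CHECKPOINT_EVERY == 0:
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step}, CKPT_PATH)
```

The trade-off Boyle describes is exactly the CHECKPOINT_EVERY knob: checkpoint every hour and a failure costs at most an hour of rework on the affected node; checkpoint once a day and it can cost a day.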
Yevgeniy Sverdlik, Data Center Knowledge: Hypothetically, I'm an IT or data center manager at a corporation, and some unit in my company wants to start training and deploying AI models. They haven't figured out what infrastructure they're going to use; maybe they'll use the cloud or something else. Help me understand the spectrum of scenarios for how this affects me as a data center manager, as an IT manager.

Charlie Boyle, Nvidia: Yeah. There is a broad spectrum of how people do AI. Lots of people start in the cloud, and Nvidia has its top-end GPUs in every single cloud. If you're just trying and experimenting with things, you don't know what's going to work yet, and it's easy enough for you to get your data into the cloud, that's definitely a great way to get started.

Charlie Boyle, Nvidia: It's a lower-cost entry point in a lot of ways, but what we generally find is that as customers get serious about AI training, they want something that's closer to them, closer to their data. That brings up the data gravity question. If a customer comes to me and says, "Hey, Charlie, I've been an online business since we started and all of my data is in the cloud," well, you should probably do your AI in the cloud at that point.

Charlie Boyle, Nvidia: But the majority of our customers have a fair amount of data they want to use that's central to their enterprise. It's in a ten-year data store they have, it's their CRM records for the past 20 years, it's customer behavior from the last five, and that's generally somewhere on-prem. When I say on-prem, that doesn't mean they own their own data centers. Lots of them are in co-los. It's in a facility where they control the data.

Charlie Boyle, Nvidia: We always advise people to move the AI compute to the data, move it as close as possible, because you're going to spend so much time pulling the data into the AI infrastructure to do training and inference that it doesn't make sense to keep moving it back and forth between the cloud and on-prem, or vice versa.

Charlie Boyle, Nvidia: From an IT administrator's perspective, there are two scenarios that come up often as I talk to higher-level executives. One is that their users are starting to work and train in the cloud, and lots of times they'll come back and say, "Wow, I got my quarterly bill and it was pretty high. I didn't expect that," because AI has an unlimited appetite for compute. So as an IT administrator, to get ahead of it, if your users are starting out in the cloud, educate them on cloud usage. Make sure that when they're done using something, they turn the instance off. If your job is going to take eight hours and it's going to end at 5:06, don't wait until the morning to turn it off, because you're burning valuable time.

Charlie Boyle, Nvidia: On the on-premises side, one of the things we've really tried to do with the DGX platform is make it easy for IT. People look at the systems and say, "Wow, this is a big system. It's six rack units. It's five to six kilowatts. This is something I'm not used to." At the end of the day, it's a Linux box. We position and deliver our systems as a complete solution, but to an IT administrator, it's just a bigger Linux box. We use standard Linux distributions. We do all the software QA on all of the additional things that we put above the Linux layer, and we fully support that to the IT department, but they shouldn't be scared of it. Whether they buy a DGX or they buy an OEM server platform using Nvidia GPUs, they can operate it the same way. [crosstalk 00:35:07].
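To put a rough number on the data-gravity point above, here is a back-of-the-envelope calculation; the dataset size and link speed are invented for illustration, not figures from the interview.

```python
# How long does it take just to move a training dataset to where the GPUs are?
dataset_tb = 100     # hypothetical on-prem dataset you want to train on
link_gbps = 10       # hypothetical effective bandwidth to the cloud

seconds = (dataset_tb * 1e12 * 8) / (link_gbps * 1e9)
print(f"~{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days) to move {dataset_tb} TB")
# Roughly 22 hours for a single pass; repeat that for every data refresh and
# it quickly argues for putting the AI compute next to the data instead.
```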
Yevgeniy Sverdlik, Data Center Knowledge: If I'm the data center manager and, say, this is being deployed in my on-prem data center, should I be scared of it? Maybe there's a power density [crosstalk 00:35:17].

Charlie Boyle, Nvidia: You absolutely shouldn't be scared of it. That's just planning. One of the things we've talked about, and you may have seen some of our material out there, is "empty is the new green." When people say, "Oh, one of these DGX boxes or one of these OEM servers is five kilowatts and I only have 10 kilowatts available in my rack, or I have 12 kilowatts, or seven," they say, "I've got all this empty space."

Charlie Boyle, Nvidia: As a long-term data center guy myself, you look at empty space and you think, "I'm not using my space well." But the amount of acceleration and computation that you get out of these systems replaces so many of your standard [inaudible 00:36:01] pizza boxes. Four or five years ago, we were making comparisons that one DGX-1 replaces a few hundred Intel pizza-box servers. We don't even make those comparisons anymore, because the number is into the thousands at this point.

Charlie Boyle, Nvidia: If you're doing AI training, and even AI inference, you're using GPUs, so the systems are going to be more powerful, but it's not something completely different. It's just that you've consolidated multiple racks' worth of infrastructure into one server. Okay, yeah, that one server takes more power, but with good planning that doesn't really matter, as long as you can fit at least one of these systems in a rack.

Charlie Boyle, Nvidia: Like I said, that's why we have such great OEM partners as well. We only build one DGX system. It's turned up to 11. It's the best. Our OEM partners build smaller systems in cases where people need different density or different power levels, but regardless of whether you buy mine or an OEM server, as an IT person you absolutely shouldn't be scared of this. If you're scared of it, you haven't looked into it enough. Call us. Like I said, even if you don't buy mine, we'll still give you advice on the right way to do it.

Yevgeniy Sverdlik, Data Center Knowledge: Do you run into IT people who are "scared" of it?

Charlie Boyle, Nvidia: Four years ago, yes. I'm seeing a lot less of that now. I think people understand it well enough. Does an IT person need to know NVLink gen three? No. They're never going to program it, they're never going to touch it. Their users aren't going to program it or touch it. It's software that we handle.

Charlie Boyle, Nvidia: Four or five years ago, when we first introduced this box and it had eight InfiniBands on it and it was gold and it weighed 100 pounds and took three kilowatts, people were like, "I've never seen something like this before." But as AI has become more mainstream and they've looked into it, most IT people are like, "Yeah, it's running standard Linux."
Charlie Boyle, Nvidia: That's one of the things we've changed in the product over time. When we first launched the product, we had our own OS, which was [inaudible 00:38:16] a bunch of Nvidia-specific stuff, because we just wanted a simple experience for all of our end users. But we got a lot of feedback from IT organizations saying, "Even though we get it, it's the best performance, Nvidia has done all the work to put it together, we're really a Red Hat shop and we're a one-Linux shop. How are you going to help us?"

Charlie Boyle, Nvidia: In year two, we introduced Red Hat support, and that's just as good, obviously. You have to have a Red Hat license for it, but it's something we've made easy for users. We continue to put out new software and new scripts to make setup and configuration easy. After you get past the power question, a lot of people say, "You've got InfiniBand. I don't run InfiniBand in my data center." Well, InfiniBand is just the compute fabric that connects the DGXs together. You don't have to manage it at all. It's a physical-layer connection. It shouldn't be scary for you. It's just part of the solution.

Charlie Boyle, Nvidia: Four years ago, three years ago, there was definitely more concern. I spent a lot more time explaining this to IT folks, and after you explain it to them one on one, they're like, "Oh, yeah, I get it. It's not that difficult." A lot of that is because we've published papers and information on how we do it, and we do it with very few IT staff. We don't have a massive IT organization running these 3,000 systems, given the amount of compute power those things put out relative to the amount of IT staff [crosstalk 00:39:46].

Yevgeniy Sverdlik, Data Center Knowledge: How big is that organization?

Charlie Boyle, Nvidia: I don't know off the top of my head, but it's smaller than you would expect for an organization running thousands of high-end servers. At the end of the day, they're all the same. That's what really helps our customers with our design: for every single DGX that I've ever shipped out of the factory, I have hundreds, if not thousands, of those exact same systems at the exact same software level running internally.

Charlie Boyle, Nvidia: As an IT administrator, the number one thing that caused me pain in the past, and I'm sure causes my brethren pain today, is when you've got lots of servers that are fundamentally different: I can't use the same SBIOS, I can't use the same BMC settings on these, because some application wants something different. Every single DGX we ship is exactly the same and can be updated the same way. To operate them, if you take a very big-picture view, it's an IT administrator's dream. It's a completely homogeneous system.

Charlie Boyle, Nvidia: Now, users and applications on top of that, we all deal with those things, but from a basic IT standpoint, it's actually fairly simple to update. We put a ton of engineering into making the system easy to update. We publish a single package that updates absolutely everything on the system, all the firmware, the SBIOS, all the settings that you need, so you're not sitting there going, "Oh, I've got to run this patch, this other patch. Do I need this patch?" It's just one update container we give you a few times a year, and it just goes [crosstalk 00:41:23].
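The "every DGX is identical and updated identically" point amounts to a fleet-homogeneity check that an IT team can run mechanically. The snippet below is a hypothetical illustration of such a check; the node names, version strings, and the idea of collecting them over SSH or a BMC interface are assumptions, not a description of Nvidia's tooling.

```python
from collections import Counter

# Hypothetical inventory: the software/firmware bundle version each node
# reports, however it is collected (SSH, BMC interface, asset database).
reported_versions = {
    "dgx-001": "5.4.2",
    "dgx-002": "5.4.2",
    "dgx-003": "5.3.1",   # straggler that missed the last update container
    "dgx-004": "5.4.2",
}

counts = Counter(reported_versions.values())
target_version, _ = counts.most_common(1)[0]
stragglers = [n for n, v in reported_versions.items() if v != target_version]

print(f"Fleet target version: {target_version}")
print("Out-of-date nodes:", stragglers or "none")
```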
Yevgeniy Sverdlik, Data Center Knowledge: Let's talk about the future a little bit. Companies have been able to get away, as you've mentioned, with hosting AI hardware in the data centers or co-los they have without adding fancy things like liquid cooling, basically just by spreading the hardware across the data center floor, if they have enough floor space. Do you see a point in the future where that will no longer be an option, and the hardware will use so much power that it will have to be liquid-cooled no matter how much data center space you have available?

Charlie Boyle, Nvidia: For the foreseeable future, I believe we'll always have an air option. There will be some designs that will be more optimal for liquid cooling, but part of my job, and the team's job, is to give customers the right type of designs for where the market is.

Charlie Boyle, Nvidia: Right now, we don't sell any liquid-cooled designs. In the future, I definitely see that, but it doesn't mean you have to go back to the old mainframe days. I can tell you, the very first data center I was retrofitting to do hosting, back in the mid '90s, when we pulled up the floor tiles there were a ton of liquid pipes, because it used to be an old mainframe data center. Then the world moved away from that, because it was very complicated infrastructure. But liquid cooling has come a long way, so I think in the future people will have a lot of viable options, whether you want to do air-only, mixed air and liquid, or facility liquid.

Charlie Boyle, Nvidia: I see things coming where you can have a very high-powered server go into a local liquid-loop heat exchanger somewhere in your data center without needing to retrofit the entire facility for water. I would advise most customers: if you're building your own facility going forward and you're not going to go co-lo, you should at least have a plan for some liquid cooling. Whether you're using our stuff or not, I think that is going to be a way to get greater efficiency in your cooling. It is a fairly big uplift the first time you do it, or if you're trying to retrofit something, but if you plan it in from day one, it's not actually that expensive.

Yevgeniy Sverdlik, Data Center Knowledge: Words of wisdom from Charlie Boyle, guys. If you're building a data center today, make sure it can do liquid cooling.

Charlie Boyle, Nvidia: Yeah. That doesn't necessarily mean you've got to install all that infrastructure right now, but, like I said, we've got a great set of co-location partners around the world, and we've been working with them for years. I think almost all of them will have some liquid cooling capabilities in the future, but in a lot of cases it's in a plan, meaning: when I need it, I put the chillers here, I put the pipes there, and I don't need to rip up everything.

Charlie Boyle, Nvidia: Even as an end user, if you're building your next large corporate data center, have a plan so you can accommodate space for that type of equipment. It doesn't mean you've got to spend the capital today, but you'd feel bad in three years when your data center is half full, you realize you need liquid cooling, and you've got to rip up a bunch of stuff. It's probably better to start that planning now.
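A small capacity-planning sketch using the ballpark figures from earlier in the conversation (a five-to-six-kilowatt, six-rack-unit system in racks provisioned for roughly 7 to 12 kW). The numbers are illustrative, and real planning would also account for cooling capacity, redundancy, and networking gear in the rack.

```python
import math

SYSTEM_KW = 6.0   # one DGX-class server, upper end of the quoted range
SYSTEM_RU = 6     # rack units per system
RACK_RU = 42      # usable rack units, a common assumption

def systems_per_rack(rack_kw: float) -> int:
    """Whichever runs out first, power or space, limits the rack."""
    by_power = int(rack_kw // SYSTEM_KW)
    by_space = RACK_RU // SYSTEM_RU
    return min(by_power, by_space)

def racks_needed(num_systems: int, rack_kw: float) -> int:
    return math.ceil(num_systems / systems_per_rack(rack_kw))

# A 12 kW rack fits two 6 kW systems; a 7 kW rack fits only one, which is
# where the "empty is the new green" observation comes from.
print(systems_per_rack(12.0), "systems per 12 kW rack")
print(systems_per_rack(7.0), "systems per 7 kW rack")
print(racks_needed(20, 12.0), "racks for a 20-system pod at 12 kW per rack")
```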
Charlie Boyle, Nvidia: There are lots of experts across the industry if you're thinking about putting in liquid cooling or partial liquid cooling, because a lot of customers probably won't need liquid cooling everywhere in their data center, the world is not going to be there for a very long time, but you'll need pockets of it in places. Figure out a plan in your own mind: "Okay, I'm going to dedicate this corner of my data center, because it's closest to the wall, to liquid-cooled infrastructure." It's common-sense things like that.

Yevgeniy Sverdlik, Data Center Knowledge: That's one of the reasons why co-lo really is the natural choice for all of this stuff. They give you quality infrastructure, you don't have to manage it in most cases, they ensure you have expansion headroom, they have lots of locations, and all of that is built into their business model. Charlie, do you think co-location facilities will be the primary way enterprises house their AI infrastructure in the future, for all those reasons?

Charlie Boyle, Nvidia: You know, I would think so in a lot of ways. Just look at our own journey at Nvidia: we're doing very well as a company, we have the capital, we could build our own data centers, but we go to co-lo. Why do we do that? Because it's their core business to do these things.

Charlie Boyle, Nvidia: Now, we do push them, and we give them requirements and really high power targets because we want to push limits, but they are the experts in this, and we work hand in hand with our partners so that they understand where we are today and where we may be in a few years, so [inaudible 00:46:12] plan for those things.

Charlie Boyle, Nvidia: If you've got a corporate data center today, you should always think about what you need physically close to you. Like I said, a number of our DGX systems are actually right around our headquarters in Santa Clara, but we have other stuff that's further away, and the stuff that's further away is stuff we know we never have to physically touch that often.

Charlie Boyle, Nvidia: As a corporate user, you should always look at: what are the things that IT needs to touch a lot? Keep that close. What are the things where I need great performance and resilience and all the great things that co-los bring me, and that I need to be able to drive to? And what are the things I should cost-optimize, where it's tried-and-true technology and I don't mind if it's a state or two or three away, because I don't need to touch it that often and the co-location provider has fabulous services? If I'm ever in a state where I do need someone to physically touch the box, that's standard in co-lo today. You don't have to ask for those services separately. That's usually built into your contracts.
