Showing results for tags 'technology'.

Found 13 results

  1. Researchers at Heidelberg University and University of Bern have recently devised a technique to achieve fast and energy-efficient computing using spiking neuromorphic substrates. This strategy, introduced in a paper published in Nature Machine Intelligence, is a rigorous adaptation of a time-to-first-spike (TTFS) coding scheme, together with a corresponding learning rule implemented on certain networks of artificial neurons. TTFS is a time-coding approach, in which the activity of neurons is inversely proportional to their firing delay. "A few years ago, I started my Master's thesis in the Electronic Vision(s) group in Heidelberg," Julian Goeltz, one of the leading researchers working on the study, told TechXplore. "The neuromorphic BrainScaleS system developed there promised to be an intriguing substrate for brain-like computation, given how its neuron and synapse circuits mimic the dynamics of neurons and synapses in the brain." When Goeltz started studying in Heidelberg, deep-learning models for spiking networks were still relatively unexplored and existing approaches did not use spike-based communication between neurons very effectively. In 2017, Hesham Mostafa, a researcher at University of California—San Diego, introduced the idea that the timing of individual neuronal spikes could be used for information processing. However, the neuronal dynamics he outlined in his paper were still quite different from biological ones and thus were not applicable to brain-inspired neuromorphic hardware. "We therefore needed to come up with a hardware-compatible variant of error backpropagation, the algorithm underlying the modern AI revolution, for single spike times," Goeltz explained. "The difficulty lay in the rather complicated relationship between synaptic inputs and outputs of spiking neurons." Initially, Goeltz and his colleagues set out to develop a mathematical framework that could be used to approach the problem of achieving deep learning based on temporal coding in spiking neural networks. Their goal was to then transfer this approach and the results they gathered onto the BrainScaleS system, a renowned neuromorphic computing system that emulates models of neurons, synapses, and brain plasticity. "Assume that we have a layered network in which the input layer receives an image, and after several layers of processing the topmost layer needs to recognize the image as being a cat or a dog," Laura Kriener, the second lead researcher for the study, told TechXplore. "If the image was a cat, but the 'dog' neuron in the top layer became active, the network needs to learn that its answer was wrong. In other words, the network needs to change connections—i.e., synapses—between the neurons in such a way that the next time it sees the same picture, the 'dog' neuron stays silent and the 'cat' neuron is active." The problem described by Kriener and addressed in the recent paper, known as the 'credit assignment problem," essentially entails understanding which synapses in a neural network are responsible for a network's output or prediction, and how much of the credit each synapse should take for a given prediction. To identify what synapses were involved in a network's wrong prediction and fix the issue, researchers often use the so-called error backpropagation algorithm. This algorithm works by propagating an error in the topmost layer of a neural network back through the network, to inform synapses about their own contribution to this error and change each of them accordingly. 
When neurons in a network communicate via spikes, each input spike 'bumps' the potential of a neuron up or down. The size of this 'bump' depends on the weight of a given synapse, known as the 'synaptic weight.' "If enough upward bumps accumulate, the neuron 'fires'—it sends out a spike of its own to its partners," Kriener said. "Our framework effectively tells a synapse exactly how to change its weight to achieve a particular output spike time, given the timing errors of the neurons in the layers above, similarly to the backpropagation algorithm, but for spiking neurons. This way, the entire spiking activity of a network can be shaped in the desired way—which, in the example above, would cause the 'cat' neuron to fire early and the 'dog' neuron to stay silent or fire later." Due to its spike-based nature and to the hardware used to implement it, the framework developed by Goeltz, Kriener and their colleagues exhibits remarkable speed and efficiency. Moreover, the framework encourages neurons to spike as quickly as possible and only once. Therefore, the flow of information is both quick and sparse, as very little data needs to flow through a given neural network to enable it to complete a task. "The BrainScaleS hardware further amplifies these features, as its neuron dynamics are extremely fast—1,000 times faster than those in the brain—which translates to a correspondingly higher information processing speed," Kriener explained. "Furthermore, the silicon neurons and synapses are designed to consume very little power during their operation, which brings about the energy efficiency of our neuromorphic networks." The findings could have important implications for both research and development. In addition to informing further studies, they could pave the way toward the development of faster and more efficient neuromorphic computing tools. "With respect to information processing in the brain, one longstanding question is: Why do neurons in our brains communicate with spikes? Or in other words, why has evolution favored this form of communication?" M. A. Petrovici, the senior researcher for the study, told TechXplore. "In principle, this might simply be a contingency of cellular biochemistry, but we suggest that a sparse and fast spike-based information processing scheme such as ours provides an argument for the functional superiority of spikes." The researchers also evaluated their framework in a series of systematic robustness tests. Remarkably, they found that their model is well suited to imperfect and diverse neural substrates, which would resemble those in the human cortex, where no two neurons are identical, as well as to hardware with variations in its components. "Our demonstrated combination of high speed and low power comes, we believe, at an opportune time, considering recent developments in chip design," Petrovici explained. "While on modern processors the number of transistors still increases roughly exponentially (Moore's law), the raw processing speed as measured by the clock frequency has stagnated since the mid-2000s, mainly due to the high power dissipation and the high operating temperatures that arise as a consequence. Furthermore, modern processors still essentially rely on a von Neumann architecture, with a central processing unit and a separate memory, between which information needs to flow for each processing step in an algorithm." In neural networks, memories or data are stored within the processing units themselves; that is, within neurons and synapses.
This can significantly increase the efficiency of a system's information flow. As a consequence of this greater efficiency in information storage and processing, the framework developed by this team of researchers consumes comparatively little power. Therefore, it could prove particularly valuable for edge computing applications such as nanosatellites or wearable devices, where the available power budget is not sufficient to support the operations and requirements of modern microprocessors. So far, Goeltz, Kriener, Petrovici and their colleagues ran their framework using a platform for basic neuromorphic research, which thus prioritizes model flexibility over efficiency. In the future, they would like to implement their framework on custom-designed neuromorphic chips, as this could allow them to further improve its performance. "Apart from the possibility of building specialized hardware using our design strategy, we plan to pursue two further research questions," Goeltz said. "First, we would like to extend our neuromorphic implementation to online and embedded learning." For the purpose of this recent study, the network developed by the researchers was trained offline, on a pre-recorded dataset. However, the team would like to also test it in real-world scenarios where a computer is expected to learn how to complete a task on the fly by analyzing online data collected by a device, robot or satellite. "To achieve this, we aim to harness the plasticity mechanisms embedded on-chip," Goeltz explained. "Instead of having a host computer calculate the synaptic changes during learning, we want to enable each synapse to compute and enact these changes on its own, using only locally available information. In our paper, we describe some early ideas towards achieving this goal." In their future work, Goeltz, Kriener, Petrovici and their colleagues would also like to extend their framework so that it can process spatiotemporal data. To do this, they would need to also train it on time-varying data, such as audio or video recordings. "While our model is, in principle, suited to shape the spiking activity in a network in arbitrary ways, the specific implementation of spike-based error propagation during temporal sequence learning remains an open research question," Kriener added.
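To make the time-to-first-spike idea described above a little more concrete, here is a minimal, self-contained Python sketch: inputs are encoded so that stronger values spike earlier, and a toy readout picks whichever output unit would fire first. It only illustrates the coding scheme discussed in the article; it is not the authors' model or anything running on BrainScaleS, and the encoding function, weights and readout rule are invented for the example.

```python
# Illustrative sketch of time-to-first-spike (TTFS) coding: stronger inputs
# spike earlier, and a decision is read out from whichever output unit would
# fire first. Not the authors' model or the BrainScaleS implementation; the
# encoding, weights and readout rule here are hypothetical.

import numpy as np

def ttfs_encode(intensities, t_max=1.0):
    """Map intensities in (0, 1] to spike times: larger value -> earlier spike."""
    intensities = np.clip(np.asarray(intensities, dtype=float), 1e-6, 1.0)
    return t_max * (1.0 - intensities)  # activity inversely proportional to delay

def first_spike_readout(spike_times, weights):
    """Toy readout: each output unit's 'firing time' is a weighted sum of input
    spike times; the unit that would fire earliest wins (e.g. 'cat' vs 'dog')."""
    candidate_times = weights @ spike_times  # shape: (n_outputs,)
    return int(np.argmin(candidate_times)), candidate_times

# Example: four input features, two output units.
inputs = [0.9, 0.1, 0.8, 0.2]
spike_times = ttfs_encode(inputs)
weights = np.array([[0.7, 0.1, 0.9, 0.1],   # hypothetical 'cat' unit
                    [0.1, 0.8, 0.1, 0.9]])  # hypothetical 'dog' unit
winner, times = first_spike_readout(spike_times, weights)
print("spike times:", spike_times, "-> winning unit:", winner)
```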
  2. Cheap Xbox Series X|S storage upgrades are possible with a conversion adapter. A Chinese company has released a conversion adapter that lets you install select M.2-2230 SSDs into Microsoft's Xbox Series X and S game console expansion slots, reports Hermitage Akihabara. The tiny device enables cheap storage upgrades for Microsoft's latest gaming machines and breaks Seagate's monopoly on Xbox Series X and S storage expansion cards. Sintech's DIY CFexpress Card PA-CFEM2-C conversion adapter can house an M.2-2230 NVMe SSD and connect it to a CFexpress Type-B interface. The unit is marketed specifically for Microsoft's latest game consoles. However, you can also use it to make your own CFexpress Type-B card for use with professional DSLR cameras and appropriate card readers. The adapter costs $29.99. The adapter has a major limitation, though. While it can house any short M.2-2230 drive with a PCIe interface, the consoles are only compatible with select SSDs featuring a specific firmware and internal format. For example, Western Digital's WD Blue CH SN530 is natively compatible with Microsoft's consoles, but the WD Blue PC SN530 is not. This could be why you can't use typical CFexpress 1.0 Type-B cards to expand the storage in Microsoft's consoles. Unfortunately, it's currently unclear how many SSDs on the market fit these specific requirements. Microsoft's latest Xbox Series X|S game consoles use proprietary storage expansion cards that come in a CFexpress 1.0 Type-B form factor and use two PCIe Gen4 lanes (as opposed to the two PCIe Gen3 lanes mandated by the CFexpress 1.0 Type-B specification). Since these cards are currently only made by Seagate, they are quite expensive — around $220 for a 1TB version. However, as proven by an enthusiast, it is possible to build an expansion drive for the latest Xboxes using a CFexpress to M.2-2230 adapter (which was designed to build higher-capacity storage devices for cameras) and a compatible SSD.
  3. When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These "superhuman" AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people? In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it had never met before. In single-blind experiments, participants played two series of the game: One with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way. The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS). "It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those." Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges—like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning. A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical "reward" by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AI aren't programmed to follow "if/then" statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code. "Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data," Allen says. "The sky's the limit in what it could, in theory, do." Bad hints, bad plays Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades. The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next. The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. 
Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents. "That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well." Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games. "One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays." Inhuman creativity This perception of AI making "bad plays" links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as "genius." Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious. "There was a lot of commentary about giving up, comments like "I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group. Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts. "Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds. Squishy humans The researchers note that the AI used in this study wasn't developed for human preference. But, that's part of the problem—not many are. 
Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance. If researchers don't focus on the question of subjective human preference, "then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences." Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, which this experiment was funded under in Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality. The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year. "You can imagine we rerun the experiment, but after the fact—and this is much easier said than done—the human could ask, 'Why did you do that move, I didn't understand it?' If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says. Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team. "Maybe it's also a staffing bias. Most AI teams don't have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough." Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.
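As a rough illustration of the trial-and-error loop described in the article above (an agent discovering which actions yield the most numerical reward, rather than being told what to do), here is a minimal tabular sketch in Python. It is a generic one-step value-update toy on a made-up reward table, not the Hanabi agent or the rule-based bot used in the study.

```python
# Minimal sketch of reinforcement-style trial and error: the agent is never
# told which action is correct, it tries actions and reinforces whichever ones
# returned more reward. Generic toy, not the Hanabi agent from the study.

import random

N_STATES, N_ACTIONS = 3, 2
# Hypothetical reward table: in each state, one action pays off, the other doesn't.
REWARD = {(0, 1): 1.0, (1, 0): 1.0, (2, 1): 1.0}

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for episode in range(2000):
    state = random.randrange(N_STATES)
    # Explore occasionally, otherwise exploit the current value estimates.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    reward = REWARD.get((state, action), 0.0)
    # One-step update: nudge the estimate toward the observed reward.
    q[state][action] += alpha * (reward - q[state][action])

print([max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)])
# Expected learned policy: [1, 0, 1] — discovered purely from numerical reward.
```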
  4. New info on the initial DDR5 offerings. By now, it's common knowledge that Intel's 12th Generation Alder Lake will be the first mainstream processor platform to embrace DDR5 memory. The chipmaker (via momomo_us) has released a new document that lists the different DDR5-4800 memory modules that have been validated for its next-generation platform. Perhaps one of these models will land the first DDR5 spot on our best RAM list. Instead of validating the different DDR5 products itself, Intel delegated the arduous task to Advanced Validation Labs, Inc (AVL), a renowned specialist in testing and validating memory during the pre- or post-production phase. The company specifically concentrated on DDR5-4800 memory, which is the baseline standard for Alder Lake. These are non-ECC memory modules that stick to JEDEC's guidelines, including a 1.1V DRAM voltage and mediocre 40-39-39 timings. AVL tested memory modules from big-name vendors, such as SK hynix, Samsung, Micron, Crucial and Kingston. While the data rate remains the same for all the candidates, the capacities vary between 8GB and 32GB per memory module. According to the Intel document, DRAM manufacturers will start with 16-gigabit DDR5 RAM chips, so there's enough headroom to work up to the capacity they want to offer for each individual memory module. One of the novelties with DDR5 is the onboard voltage regulation, which is achieved by equipping the memory module with a power management integrated circuit (PMIC). As far as the initial DDR5 memory modules are concerned, they'll leverage a PMIC from Renesas. The document didn't specify the exact model of the PMIC. However, we think it might be the P8911, which is an optimized version of the P8900 that Renesas designed for server memory. SK hynix, Samsung and Micron are IC manufacturers, so naturally they'll utilize their own ICs in their DDR5 products. Kingston, on the other hand, will tap SK hynix for its ICs. Meanwhile, Crucial, which is Micron's consumer brand, will utilize the latter's ICs. If we look at the ICs, it would seem that SK hynix and Micron will be bringing their respective M-dies and A-dies to DDR5. These scale well enough with higher voltages, but they aren't exactly recognized for operating with tight timings. That's where Samsung's B-die ICs excelled back in the DDR4 days. The document confirms that Samsung's DDR5 ICs are Revision B, so these should be B-die. If the DDR5 B-dies are anything like the previous DDR4 B-dies, they'll probably become the de facto ICs for overclockers again. Apparently, the recipe doesn't vary between 8GB and 16GB memory modules, regardless of the brand. The companies will stick with a single-rank design: 1Rx16 for 8GB and 1Rx8 for 16GB. In comparison, 16GB DDR4 modules used to be a guarantee of a dual-rank design in the beginning. Eventually, many memory brands transitioned to a single-rank design thanks to the introduction of higher-density chips. With DDR5, however, 32GB memory modules are the only surefire ticket to a dual-rank (2Rx8) layout. Why does this matter? Dual-rank memory is typically faster than single-rank memory, although not in all workloads. Both Intel's Core and AMD's Ryzen processors benefit from dual-rank memory, and tests have shown that four memory ranks are the ideal configuration for maximum performance. It remains to be seen whether Alder Lake favors the same setup, though.
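For readers wondering how the 1Rx16, 1Rx8 and 2Rx8 organizations mentioned above map onto 8GB, 16GB and 32GB modules, here is a short sketch of the arithmetic. It assumes 16-gigabit dies and the usual 64 data bits per rank on a non-ECC module; this is generic DRAM math rather than anything taken from the Intel/AVL document.

```python
# Back-of-the-envelope check of the module organizations mentioned above,
# assuming 16-gigabit DDR5 dies and 64 data bits per rank on a non-ECC module
# (generic DRAM arithmetic, not taken from the Intel/AVL validation document).

DIE_GBIT = 16  # assumed density per DRAM chip, in gigabits

def module_capacity_gb(ranks: int, device_width: int, die_gbit: int = DIE_GBIT) -> int:
    chips_per_rank = 64 // device_width  # x16 dies -> 4 chips per rank, x8 dies -> 8
    total_chips = ranks * chips_per_rank
    return total_chips * die_gbit // 8   # gigabits -> gigabytes

for label, ranks, width in [("1Rx16", 1, 16), ("1Rx8", 1, 8), ("2Rx8", 2, 8)]:
    print(f"{label}: {module_capacity_gb(ranks, width)} GB")
# 1Rx16: 8 GB, 1Rx8: 16 GB, 2Rx8: 32 GB -- matching the configurations above.
```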
  5. Tesla says it delivered 241,300 electric vehicles in the third quarter even as it wrestled with a global shortage of computer chips that has hit the entire auto industry. The Palo Alto, California, company's sales from July through September beat Wall Street estimates of 227,000 sales worldwide, according to data provider FactSet. Third-quarter sales rose 72% over the 140,000 deliveries Tesla made for the same period a year ago. So far this year, Tesla has sold around 627,300 vehicles. That puts it on pace to soundly beat last year's total of 499,550. Wedbush analyst Daniel Ives wrote in a note to investors that the pace of electric vehicle deliveries in the U.S. and China has been strong for the past month or so. That means an "eye-popping growth trajectory heading into 4Q and 2022 for (CEO Elon) Musk & Co." Still, Ives estimated that the chip shortage will knock 40,000 vehicles from Tesla's annual delivery number. He estimates the deliveries to be at least 865,000 vehicles, with a bull case of around 900,000. "In a nutshell, with chip shortage headwinds, China demand still recovering from earlier this year, and EV competition coming from all angles, Tesla's ability to navigate these challenges this quarter have been very impressive," he wrote. In the third quarter, the smaller Model 3 sedan and Y SUV led the way with 232,025 sales, followed by the larger Models S and X at 9,275. Tesla said it produced 237,823 vehicles for the quarter.
  6. Razer Kiyo X webcam costs $79.99 (roughly Rs. 5,900). The Razer Kiyo X webcam and Ripsaw X capture card were launched on Friday, October 1 as entry-level offerings for video streaming. The webcam offers 1080p (full-HD) video recording at 30fps or 720p at 60fps. The Ripsaw X is a plug-and-play capture card that can handle video feeds of up to 4K at 30fps. The Razer Kiyo X USB webcam is equipped with autofocus functionality and offers options for customisation. Razer has provided support for Windows 10 on the Kiyo X, while the Ripsaw X card offers HDMI 2.0 and USB 3.0 connectivity.
Razer Kiyo X, Razer Ripsaw X price, availability
The Razer Ripsaw X was launched in the US with a price tag of $139.99 (roughly Rs. 10,300). The Razer Kiyo X webcam is priced at $79.99 (roughly Rs. 5,900). Both products are now available for purchase in the US via Razer's official website. Razer said that it is providing up to one year of warranty and tech support for the new webcam. The company adds that customers purchasing the Kiyo X webcam directly from its online store will be eligible for 14 days of risk-free returns. There's no word on the new devices' India availability as of yet.
Razer Kiyo X webcam specifications, features
The Razer Kiyo X webcam can stream at 1080p resolution at 30fps, or 720p at 60fps. The webcam comes with Razer Virtual Ring Light software, which utilises the PC monitor as a source of illumination for optimised lighting during streaming and video calls. The webcam comes with autofocus and fully customisable settings. It is compatible with computers running Windows 10 with 64-bit support, Open Broadcaster Software, and XSplit. Razer says the Kiyo X works with the company's Synapse 3 and is compatible with popular streaming programs. It offers an 82-degree field of view and a 1,920x1,080-pixel still image resolution. The mounting options include an L-shaped joint and a tripod, but Razer is not providing these accessories with the Kiyo X webcam. The Razer Kiyo X webcam offers USB 2.0 connectivity via a bundled 1.5-metre braided cable that connects the webcam to a PC.
Razer Ripsaw X capture card specifications, features
The new compact, dedicated Razer Ripsaw X capture card can capture feeds at up to 4K resolution. It offers recording resolutions of 3840p at 60fps or 1920p at 120fps. It features HDMI 2.0 and USB 3.0 connectivity. With the Razer Ripsaw X capture card, streamers can send camera footage straight to the hard drive or direct it to streaming software for livestreaming. It works with Open Broadcaster Software and Streamlabs. Users can turn their DSLR or hand-held camera into a webcam with the Razer Ripsaw X. Razer says it is compatible with camera models from brands including Nikon, Canon, Sony, Panasonic, Fujifilm, and GoPro.
  7. In its first big move to expand into a software company, General Motors is introducing a new software platform it created called Ultifi. The automaker will begin putting Ultifi (all-tee-fy) on some internal combustion and electric vehicles starting with the 2023 model year, with the hope that it helps boost consumer loyalty to GM cars and opens up new revenue channels beyond car sales. "Ultifi is a big, big step in our software strategy," Scott Miller, GM's vice president of software-defined vehicles, said Wednesday. "Today, cars are enabled by software; with Ultifi, cars will be defined by it." Last week, Alan Wexler, GM's senior vice president of innovation and growth, announced that GM has a new business model that extends beyond the hardware of building cars, to becoming a software platform innovator. Wexler said GM's vehicles will merely be a platform to deliver GM-developed software to offer consumers services beyond driving. Those services can then be used in their homes and other areas of their lives. Wexler called GM's new business model "a potential game-changer for delivering subscription services that create recurring revenue." Ultifi is the first step in GM's new business model, Miller said. It builds on GM's current vehicle intelligence platform (VIP). Think of VIP as a smartphone and Ultifi as the operating system that provides the functions. Ultifi holds the potential for more cloud-based services, faster software development and new ways GM can increase customer loyalty. "At our core, we're going to make great cars, trucks and vehicles," Miller said. "What we're talking about is adding a platform with Ultifi. (Customers) will love it when they buy it, but they'll love it even longer as it gets better. When the next new thing comes out they can add it to their vehicle and not have to go buy a new car, so this improves the relationship with them." Similar to software on a smartphone, Ultifi can provide regular updates and let customers choose from a variety of over-the-air upgrades, personalization options and apps. For example, imagine a camera inside your car that recognizes your face and starts the engine for you. Or, the camera can detect if there's a child in the back seat. Miller said those services would not require a subscription. Another example would be a weather forecast with the ability to close a vehicle's windows if it's parked in an area where it's expected to rain, Miller said. Or an alert that warns drivers of specific icy spots on roads. GM will open Ultifi up to allow third-party developers to create content for it, and there will be the chance to add subscription offerings for added revenue for GM. But Miller declined to say how much revenue GM expects it will generate. A GM spokesman said the automaker will discuss revenue in more depth next week, most likely at its Investor Day on Oct. 6. But Miller said GM will work out a revenue-sharing formula with third-party software providers, noting, "They're not going to come to our platform for free, but we're not going to give up our platform for free either, so that's a solvable issue we'll address in the future." A customer will buy a car with Ultifi on it and then choose from various plans or tiers that determine the number of upgrades they will get and the kinds of services, software or apps they want to access. "The key thing about Ultifi is we like to call it continuous integration," Miller said. "We're separating the software from the hardware so we can continuously upgrade apps.
It will allow us to be very agile and constantly learn how to make it better."
  8. The future of package delivery, taxis, and even takeout in cities may be in the air—above the gridlocked streets. But before a pizza-delivery drone can land safely on your doorstep, the operators of these urban aircraft will need extremely high-resolution forecasts that can predict how weather and buildings interact to create turbulence and the resulting impacts on drones and other small aerial vehicles. While scientists have been able to run simulations that capture the bewilderingly complex flow of air around buildings in the urban landscape, this process can take days or even weeks on a supercomputing system—a timeline far too slow (and a task far too computationally expensive) to be useful to daily weather forecasters. Now, scientists at the National Center for Atmospheric Research (NCAR) have demonstrated that a new kind of model built entirely to run on graphical processing units, or GPUs, has the potential to produce useful, street-level forecasts of atmospheric flow in urban areas using far fewer computing resources and on a timeline that makes real-time weather forecasting for drones and other urban aircraft plausible. In a study published recently in the journal AGU Advances, the NCAR team describes the use of a microscale model called FastEddy to simulate atmospheric conditions in downtown Dallas. "GPUs have really matured in recent years, and they hold a lot of potential to accelerate modeling," said NCAR scientist Domingo Muñoz-Esparza, lead author of the study and one of the principal model developers. "To take maximum advantage of GPUs, we built FastEddy from scratch." This study was funded by the National Science Foundation, which is NCAR's sponsor, the Defense Threat Reduction Agency, Uber Elevate, and NASA. The simulations used for the study were run on the Casper system at the NCAR-Wyoming Supercomputing Center. Traditional weather forecasts are often run at a resolution of about 10 to 15 kilometers (six to nine miles), meaning that anything smaller than that—buildings, streets, and any of the other complexities of the urban landscape—is not directly captured. Even high-resolution weather models are run with a spacing of 3–4 kilometers (1.8–2.5 miles) between grid points, which can reduce entire towns to a handful of pixels.
The FastEddy model, on the other hand, can be efficiently run at a resolution of just 5 meters (16 feet), fine enough to accurately simulate the swirling eddies and other turbulent flow features that arise in the wakes of buildings and in street canyons. Other models, including NCAR's Weather Research and Forecasting Large Eddy Simulations (WRF-LES) modeling system, can also produce similarly high-resolution simulations, but they use vastly more computing resources. These traditional simulations are highly detailed and remain important for basic research, but they are not practical for day-to-day forecasting. WRF-LES and other similar models rely on more traditional computer chips, known as central processing units. CPUs excel at performing multiple tasks, including control, logic, and device-management operations, but their ability to perform fast arithmetic calculations is limited. GPUs are the opposite. Originally designed to render 3D video games, GPUs are capable of fewer tasks than CPUs, but they are specially designed to perform mathematical calculations very rapidly. To benefit from the increased speed offered by GPUs, NCAR and other modeling institutions are working to retrofit existing modeling code—including NCAR's Model for Prediction Across Scales, a global weather model—to partially use GPUs. The result can be more efficient and faster than the original versions, but there will always be lingering inefficiencies due to bottlenecks created by the CPUs in these hybrid approaches. To take full advantage of the promise of GPU acceleration, a model's code must be written so that all the model's calculations are performed by GPUs. FastEddy was written from the ground up, primarily by NCAR scientists Jeremy Sauer and Domingo Muñoz-Esparza, to do just this. The result is a model that has a prediction rate that is six times faster under similar power consumption—or a power consumption that is eight times lower for the same prediction rate—than an equivalent CPU model. "If you want to do microscale forecasting in real time, you need to be as fast as the GPUs can be," Muñoz-Esparza said.
A change in the winds: Shifting building wakes
For the new study, the scientists used FastEddy to simulate the urban weather in downtown Dallas for over 50 selected weather scenarios in 2018.
The results confirmed the potential dangers of relying solely on significantly coarser-resolution weather forecasts for aerial operations in the "urban canopy." For example, the scientists found that in the afternoon, when sun-heated air at the surface is rising, cooling, and falling again, creating a vertical circulation, the winds in the urban canopy at 26 meters (or 85 feet) above the ground tend to be aligned with the large-scale background wind direction at the same height. But at night and in the early morning, when the atmosphere is more stable, the winds through the urban canopy are actually offset from the direction of the large-scale background winds. In fact, the wakes behind the buildings can shift up to 45 degrees clockwise from the direction of the incoming weather. Broadly, the modeling effort also showed how the weather in the urban canopy changes, on average, through the seasons, but also that individual days can display marked variation within the same month, and even within the same 24-hour period, underscoring the importance of real-time forecasts rather than relying on averages. Such forecasts could help aircraft operators determine whether they can safely accomplish their objectives, as well as how much battery charge will be needed. Turbulence in the urban canopy can cause batteries to drain as much as three times faster than they otherwise would, according to Muñoz-Esparza, which could leave an aircraft stranded in the city. Beyond modeling turbulence and wind direction in the urban canopy, the FastEddy team is working on other possible applications for the model, including a new project to model how the sound produced by air taxis might propagate through a city. They are also working on adding more detail and more physics to the model. For the Dallas experiment, the model runs were kicked off using the output from a traditional weather model with a resolution of 3 kilometers (1.8 miles), including wind velocity and direction, as well as temperature. FastEddy then took those large-scale variables and downscaled them to simulate the microscale atmospheric flows. Now the FastEddy team is working to extend the model's capability by adding moist dynamics and clouds, which will make these microscale weather predictions even more realistic. In addition, the extraordinary efficiency of FastEddy makes it possible to run the model multiple times for the same period, a technique known as ensemble forecasting. This allows scientists to have a better understanding of how certain (or uncertain) a forecast is and to generally provide more robust and reliable guidance to aircraft operators. "We want to have the possibility down the road of a full weather forecast but at the microscale—something complete," Muñoz-Esparza said. "It's a really exciting time to be involved with this kind of GPU model. There's so much potential."
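To get a feel for the resolution gap the article describes, the short sketch below counts horizontal grid points for a city-sized domain at a coarse forecast spacing, a high-resolution spacing, and FastEddy's 5-meter spacing. The 30 km domain size is an illustrative assumption, not a figure from the paper.

```python
# Rough arithmetic behind the resolution argument above: how many horizontal
# grid points a model needs to cover a city-sized domain at different grid
# spacings. The domain size is an assumption, not a number from the study.

DOMAIN_KM = 30.0  # assumed city-scale domain: 30 km x 30 km

def horizontal_points(spacing_m: float, domain_km: float = DOMAIN_KM) -> int:
    per_side = int(domain_km * 1000 / spacing_m)
    return per_side * per_side

for label, spacing in [("global/regional ~10 km", 10_000),
                       ("high-res ~3 km", 3_000),
                       ("FastEddy-style 5 m", 5)]:
    print(f"{label:>22}: {horizontal_points(spacing):,} horizontal grid points")
# ~9 vs ~100 vs ~36,000,000 points: at kilometer scales a whole city collapses
# to a handful of pixels, which is why street-level turbulence needs a
# microscale, GPU-accelerated model.
```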
  9. Massive savings on a gaming laptop beast. It's fair to say that, when looking for the best RTX 3080 laptops available right now, nobody expects to pay less than $2,000. That's why today's Newegg and MSI deal caught us off guard! At Newegg, for today only, you can get $300 off the MSI GP66 Leopard and pick up this seriously powerful portable powerhouse for just $1,999. Alongside its seriously stacked list of internal components, there are plenty of other great features on this laptop that we were quick to praise in our MSI GP66 Leopard review. The comfortable keyboard, replaceable components and subtle design make this a great unit for on-the-go gaming. This laptop maintains optimal cooling with its Cooler Boost 5 thermal management, including six heat pipes and two fans that work harmoniously to maximize airflow. Plus, its variety of I/O, including both HDMI 2.1 and Mini DisplayPort 1.4, makes this easy to slot into any home setup. If you're a PC gaming enthusiast looking for a great RTX 3080 laptop for less than $2,000, this is the kind of deal that doesn't come around very often.
  10. Bitcoin price has massively fluctuated over the past year partly due to Chinese regulations. China's central bank on Friday said all financial transactions involving cryptocurrencies are illegal, sounding the death knell for the digital trade in China after a crackdown on the volatile currencies. The global values of cryptocurrencies including Bitcoin have massively fluctuated over the past year partly due to Chinese regulations, which have sought to prevent speculation and money laundering. Bitcoin price in India stood at Rs. 33.7 lakhs as of 5pm IST on September. "Virtual currency-related business activities are illegal financial activities," the People's Bank of China (PBOC) said in an online statement Friday, adding that offenders would be "investigated for criminal liability in accordance with the law." The notice bans all related financial activities involving cryptocurrencies, such as trading crypto, selling tokens, transactions involving virtual currency derivatives and "illegal fundraising". Bitcoin extended losses Friday after China's latest crackdown on cryptocurrencies. Bitcoin, which had already been falling before the announcement, dropped as much as 6.0 percent in value before trimming losses to stand at $42,256 (roughly Rs. 31 lakhs), down 5.5 percent. The central bank said that in recent years trading of Bitcoin and other virtual currencies had become "widespread, disrupting economic and financial order, giving rise to money laundering, illegal fund-raising, fraud, pyramid schemes, and other illegal and criminal activities." This was "seriously endangering the safety of people's assets," the PBOC said. While crypto creation and trading have been illegal in China since 2019, further crackdowns this year by Beijing warned banks to halt related transactions and closed much of the country's vast network of Bitcoin miners. Thursday's statement by the central bank sent the strongest yet signal that China is closed to crypto. Control Bitcoin, the world's largest digital currency, and other cryptos cannot be traced by a country's central bank, making them difficult to regulate. Analysts say China fears the proliferation of illicit investments and fundraising from cryptocurrency in the world's second biggest economy, which also has strict rules around the outflow of capital. The crypto crackdown also opens the gates for China to introduce its own digital currency, already in the pipeline, allowing the central government to monitor transactions. In June, Chinese officials said more than 1,000 people had been arrested for using the profits from crime to buy cryptocurrencies. Several key Chinese provinces banned the operation of cryptocurrency mines since the start of this year, with one region accounting for eight percent of the computing power needed to run the global blockchain - a set of online ledgers to record Bitcoin transactions. Bitcoin values tumbled in May on the back of a warning by Beijing to investors against speculative trading in cryptocurrencies.
  11. Google is updating critical features for the millions of drivers who depend on its technology to help them get around. The tech giant announced the upcoming changes Thursday to Google Assistant and Android Auto driving modes and a new automaker, Honda, will have Google technology installed in its vehicles. Google said that drivers using Google Assistant on Android phones will soon see a new dashboard they say will reduce "the need to fiddle with your phone while also making sure you stay focused on the road." Instead of scrolling while driving, Google said drivers could tap to see who just called or sent a text and have access to several apps to listen to music with the new dashboard. The dashboard will also include a new messaging update where drivers can say, "Hey Google, turn on auto-read," to hear their new messages read aloud when they come in and respond by voice. These new changes for drivers are apparently part of what Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog earlier this year to make its technologies "universally accessible and useful." For example, users of Android Auto, Google's smartphone app for vehicles, via their Android phones will now be able to see music, news, and podcast recommendations from Google Assistant and they can set which app launches whenever Android Auto starts. Those Android Auto users will soon be able to play games appearing on the vehicle's display with a new feature called GameSnacks while they're waiting or parking. Additionally, Android Auto and Android phone users can make contactless payments for gas using Google Pay. This feature is available at more than 32,000 gas stations across the U.S., including ExxonMobil, Conoco, 76 stations and Phillips 66. On Thursday, Google announced that Japanese automaker Honda will be the latest to have Google built-in technology in its vehicles beginning in 2022. Honda, which announced in April it's aiming to sell only electric vehicles in North America by 2040, will join the likes of Ford, General Motors, Polestar, Renault and Volvo that will have its future vehicles released with default Android operating systems. The Polestar 2 and Volvo XC40 Recharge are among the current models with Google's built-in tech.
  12. Say goodbye to incompatible and obsolete chargers! The European Commission has announced new plans for legislation today that will require all mobile devices, including smartphones, tablets, cameras, headphones, portable speakers and handheld video game consoles, to conform to the USB Type-C standard for charging. The goal is to reduce e-waste from a large number of differing chargers, and it may also have the added benefit of ending those times when you can't find the right charger for your phone at a friend's house. The transition period will be 24 months once the European Parliament passes the new regulation. Fortunately, the move to Type-C has already become mainstream, with most smartphones already using Type-C for charging. The same can be said of tablets, USB headphones, and speakers as well. So once the legislation is set in motion, all major device manufacturers should be ready for it.
What About the iPhone?
This leaves one major player yet to make the transition: the iPhone. Apple is still using its home-brewed Lightning port, years after most smartphone manufacturers made the transition to Type-C. Of course, many Apple customers own these cables, as well as accessories that use the connector. MacRumors believes Apple has no plans to switch to Type-C any time soon. If anything, we might see a completely wireless iPhone before Type-C comes to the phone. It's possible Apple could sell an adapter for future iPhones to allow charging with Type-C chargers to comply with the legislation, though we don't know for sure how that would work under the proposal. Apple could also accelerate its wireless plans and make all iPhones fully wireless by 2023. Or Apple could simply change course and make a Type-C iPhone.
Fast Charging Uniformity
Along with the Type-C uniformity, the Commission is also proposing a harmonized fast-charging technology that will be compatible with all Type-C chargers and devices, to ensure charging speeds are the same when charging any device. If we had to guess, this will probably be related to Qualcomm's Quick Charge technology, which is already very popular in the mobile landscape. The Commission is also proposing unbundling charging bricks from smartphone sales to further reduce e-waste. This is something Apple is already doing with its iPhones and Samsung is doing with Galaxy smartphones. The final proposal is to improve information about charging capabilities for both devices and chargers. The Commission wants to make it perfectly clear how fast your device charges and whether or not it supports fast charging. According to the Commission, over 11,000 tonnes of chargers are wasted every single year because of incompatibility problems, and consumer reports say that at least 38% of people have problems finding or buying the right charger in the first place. For now, this legislation only applies to the European Union, but we wouldn't be surprised to see other countries following suit with similar agendas down the line.
  13. - Dell Latitude E6420 XFR
We'll start with this computer from Dell, a company with a good reputation. The Latitude E6420 XFR's 14-inch flat screen has a "DirectVue" display that stays readable even in direct sunlight. You can also drop it off a roof or spill water on it, thanks to its ballistic armor and PrimoSeal technology, which protect its ports and openings, and its special QuadCool thermal management system, which keeps temperatures under control and protects against water, dust and extreme heat, like in the desert.
Specifications:
Screen: 14-inch wide-view LED (1,366x768 pixels)
Processor: 2.8GHz dual-core Intel Core i7-2640M
RAM: 4GB DDR3
Storage: 128GB SSD
Graphics card: NVIDIA 4200M with 512MB DDR3
Battery: more than 6 hours
Operating system: Windows 7 Professional
Price: $3,210
- Panasonic Toughbook 31
Panasonic has a long history with this kind of device, going back more than 15 years. The Toughbook 31 is the sixth generation of the best this company has produced; it practically cannot be broken unless you set it on top of high explosives.
Specifications:
Screen: 13.1-inch XGA touchscreen LED (1,024x768 pixels)
Processor: 2.7GHz dual-core Intel Core i5-3340M
RAM: 4GB DDR3
Storage: 500GB 7,200rpm shock-mounted HDD
Graphics card: Intel HD 4000 Graphics
Battery: more than 13 hours (yes, you read that right: 13 hours!)
Operating system: Windows 7 Professional or Windows 8
Additional features: webcam, Wi-Fi, Bluetooth
Price: $4,039
- Getac V110
This is the one I liked most: it has a beautiful, smooth design compared to the others, its battery can also last up to 13 hours, and it has an 11.6-inch touchscreen.
Specifications:
Screen: 11.6-inch multitouch TFT LCD (1,366x768 pixels)
Processor: 1.9GHz dual-core Intel Core i5-4300U or 2.1GHz dual-core Intel Core i7-4600U
RAM: 4GB DDR3
Storage: 128GB/256GB SSD
Graphics card: Intel HD 4400 Graphics
Battery: more than 13 hours
Operating system: Windows 8 Pro
Price: $3,299
- Overall
These computers are expensive, but if you are looking for toughness and power, they are the best machines on the market at the moment.
