
Everything posted by Ronaldskk.

  1. Video title: ¡Minecraft PERO hay FAROS NUEVOS! 😮⭐💥 SILVIOGAMER MINECRAFT PERO (Minecraft BUT there are NEW BEACONS!). Content creator (YouTuber): Silvio Gamer. Official YT video:
  2. Movie name: Heroico. Year: 2023. Netflix / Amazon / HBO: N/A. Duration: 1h 28m. Trailer:
  3. https://www.gadgets360.com/internet/news/telecom-bill-ott-apps-not-covered-whatsapp-meta-signal-4728645 Over-the-top (OTT) apps and services will not fall under the ambit of the newly passed Telecommunications Bill 2023, telecom minister Ashwini Vaishnaw told ET Telecom. The minister's statement comes days after Parliament passed the new telecom bill, which replaces three older laws, including the 138-year-old Indian Telegraph Act. Provisions under the new bill reduce the powers of the Telecom Regulatory Authority of India (TRAI) and give the government unprecedented powers, including the ability to take over telecom services in the interest of national security.

After the Telecommunications Bill (2023) was passed on Thursday, concerns were raised about increased scrutiny and interference from the government if OTT communication apps like WhatsApp and Signal were included under the ambit of the new bill. "[...] There is no coverage of OTT in the new telecom bill passed by the Parliament," the minister told the publication, explaining that these OTT apps are currently covered by the Information Technology Act, 2000 and will continue to be regulated by that law, which is overseen by the Ministry of Electronics and Information Technology (MeitY). Earlier this week, Meta reportedly expressed concerns over the telecom bill in an internal email to colleagues from Shivnath Thukral, Director and Head of India Public Policy at Meta. The revised version of the bill that Parliament passed does not contain any references to OTT or OTT platforms, but it mentions terms like 'telecommunication services', 'messages' and 'telecommunications identifier', which could also apply to OTT platforms.

The bill is now awaiting the President's assent before it becomes law; it was approved in the Rajya Sabha through a voice vote on Thursday, a day after it was passed by the Lok Sabha. It is set to replace the Indian Telegraph Act of 1885, the Wireless Telegraphy Act of 1933, and the Telegraph Wires (Unlawful Possession) Act of 1950.
  4. https://techxplore.com/news/2023-12-google-gemini-ai-chatgpt.html Google DeepMind has recently announced Gemini, its new AI model to compete with OpenAI's ChatGPT. While both models are examples of "generative AI," which learn to find patterns in their input training data in order to generate new data (pictures, words or other media), ChatGPT is a large language model (LLM) that focuses on producing text.

In the same way that ChatGPT is a web app for conversations based on the neural network known as GPT (trained on huge amounts of text), Google has a conversational web app called Bard, which was based on a model called LaMDA (trained on dialogue). But Google is now upgrading that based on Gemini.

What distinguishes Gemini from earlier generative AI models such as LaMDA is that it is a "multimodal model." This means that it works directly with multiple modes of input and output: as well as supporting text input and output, it supports images, audio and video. Accordingly, a new acronym is emerging: LMM (large multimodal model), not to be confused with LLM.

In September, OpenAI announced a model called GPT-4V (GPT-4 with vision) that can work with images, audio and text as well. However, it is not a fully multimodal model in the way that Gemini promises to be. For example, while ChatGPT-4, which is powered by GPT-4V, can work with audio inputs and generate speech outputs, OpenAI has confirmed that this is done by converting speech to text on input using another deep learning model called Whisper. ChatGPT-4 also converts text to speech on output using a different model, meaning that GPT-4V itself is working purely with text. Likewise, ChatGPT-4 can produce images, but it does so by generating text prompts that are passed to a separate deep learning model called Dall-E 2, which converts text descriptions into images. In contrast, Google designed Gemini to be "natively multimodal": the core model directly handles a range of input types (audio, images, video and text) and can directly output them too. The distinction between these two approaches might seem academic, but it's important.

The general conclusion from Google's technical report and other qualitative tests to date is that the current publicly available version of Gemini, called Gemini 1.0 Pro, is not generally as good as GPT-4, and is more similar in its capabilities to GPT-3.5. Google also announced a more powerful version, called Gemini 1.0 Ultra, and presented some results showing that it is more powerful than GPT-4. However, it is difficult to assess this, for two reasons. The first is that Google has not released Ultra yet, so the results cannot be independently validated at present. The second is that Google chose to release a somewhat deceptive demonstration video. The video shows the Gemini model commenting interactively and fluidly on a live video stream. However, as initially reported by Bloomberg, the demonstration was not carried out in real time. For example, the model had learned some specific tasks beforehand, such as the three-cups-and-ball trick, where Gemini tracks which cup the ball is under. To do this, it had been provided with a sequence of still images in which the presenter's hands are on the cups being swapped.

Promising future

Despite these issues, I believe that Gemini and large multimodal models are an extremely exciting step forward for generative AI. That's both because of their future capabilities and because of what they mean for the competitive landscape of AI tools. As I noted in a previous article, GPT-4 was trained on about 500 billion words, essentially all good-quality, publicly available text. The performance of deep learning models is generally driven by increasing model complexity and the amount of training data. This has led to the question of how further improvements could be achieved, since we have almost run out of new training data for language models. However, multimodal models open up enormous new reserves of training data in the form of images, audio and video. AIs such as Gemini, which can be trained directly on all of this data, are likely to have much greater capabilities going forward. For example, I would expect that models trained on video will develop sophisticated internal representations of what is called "naïve physics": the basic understanding humans and animals have of causality, movement, gravity and other physical phenomena.

I am also excited about what this means for the competitive landscape of AI. For the past year, despite the emergence of many generative AI models, OpenAI's GPT models have been dominant, demonstrating a level of performance that other models have not been able to approach. Google's Gemini signals the emergence of a major competitor that will help to drive the field forward. Of course, OpenAI is almost certainly working on GPT-5, and we can expect that it will also be multimodal and will demonstrate remarkable new capabilities. All that being said, I am keen to see the emergence of very large multimodal models that are open-source and non-commercial, which I hope are on the way in the coming years.

I also like some features of Gemini's implementation. For example, Google has announced a version called Gemini Nano that is much more lightweight and capable of running directly on mobile phones. Lightweight models like this reduce the environmental impact of AI computing and have many benefits from a privacy perspective, and I am sure this development will lead to competitors following suit.
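To make the article's distinction concrete, here is a minimal sketch of the chained, text-centric pipeline described above (speech to text to LLM, then back out to speech and images), written against the OpenAI Python client. The model identifiers are the public API names as of late 2023; the wiring is an illustrative assumption about how such a pipeline can be composed, not OpenAI's actual internal implementation.

```python
# Illustrative sketch only: mimics the chained pipeline the article describes
# (speech -> text -> LLM -> text -> speech/image), NOT OpenAI's internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech input is first converted to text by a separate model (Whisper).
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# 2. The language model itself only ever sees text.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3. Text output is converted back to audio by yet another separate model.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")

# 4. Image output likewise goes through a separate text-to-image model.
image = client.images.generate(model="dall-e-2", prompt=answer, n=1)
print(image.data[0].url)
```

A natively multimodal model, as Gemini is claimed to be, would handle the audio and image tokens inside a single network, removing the transcription and prompt-generation hops between separate models.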
  5. https://www.tomshardware.com/raspberry-pi/raspberry-pi-zerowriter-eink-typewriter-lets-you-take-notes-on-the-go The Raspberry Pi is no stranger to cyberdecks, with thousands of them having been created and shared over the last decade alone. Today, however, we have something a little simpler, yet just as capable, to share that uses one of our favorite SBCs, the Raspberry Pi Zero 2 W. The project is called ZeroWriter, and it was created by a maker who goes by Tincangames over on Reddit. ZeroWriter is a completely portable DIY writing computer that features a 4.2-inch eInk display and uses a 40% keyboard for input.

The project is completely open source, so there's plenty of opportunity for modification: you can add as much storage as you'd like and make all sorts of other adjustments for both usability and efficiency. Because the ZeroWriter is built around a Pi Zero, you can use pretty much any compatible USB keyboard. Tincangames is using a 40% Vortex Core keyboard, and the code shared in the project files is designed specifically for it. There are also STL files available for anyone who wants to download and 3D print the chassis that holds everything together.

If you want to recreate this Raspberry Pi project, you will need, at minimum, about $50 (USD) worth of materials, which includes a Pi Zero 2 W, a microSD card, an eInk display, and a USB keyboard. To recreate Tincangames' ZeroWriter to a tee, you'll need more hardware and will have to spend about $200 in total. The software for the project was written by Tincangames just for the ZeroWriter and is Python-based. It allows you to do things like create new documents, save them, and power down. You can explore the code for this project over at the official ZeroWriter project page on GitHub. There are also more details about its creation over at Hackaday.
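For a flavor of what a Python eInk writer like this has to do, here is a minimal sketch that takes a line of text and paints it onto a Waveshare 4.2-inch panel using the vendor's waveshare_epd driver and Pillow. The driver module, font path and layout here are assumptions for illustration; this is not ZeroWriter's actual code, which lives on the project's GitHub page.

```python
# Minimal sketch, not ZeroWriter's code: render a line of text on a
# Waveshare 4.2" eInk panel (400x300) via the vendor driver and Pillow.
from PIL import Image, ImageDraw, ImageFont
from waveshare_epd import epd4in2  # Waveshare's driver for the 4.2" panel

def show_text(text: str) -> None:
    epd = epd4in2.EPD()
    epd.init()

    # Draw onto an in-memory 1-bit image the size of the panel.
    image = Image.new("1", (epd.width, epd.height), 255)  # 255 = white
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(
        "/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf", 18  # assumed path
    )
    draw.text((10, 10), text, font=font, fill=0)  # 0 = black

    # Push the buffer to the panel, then sleep it; eInk keeps the image
    # with no power, which is why it suits a battery-powered typewriter.
    epd.display(epd.getbuffer(image))
    epd.sleep()

if __name__ == "__main__":
    show_text(input("> "))
```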
  6. Movie name: Saw X. Year: 2023. Netflix / Amazon / HBO: N/A. Duration: 1h 58m. Trailer:
  7. I think we already told you to use one account instead of the four you have; make up your mind!!
  8. xD congrats on the unban
  9. @ShadowViperKnight has been added to our team. Welcome!
  10. @7aMoDi has been added to our team. Welcome!
  11. This post cannot be displayed because it is in a password-protected forum.
  12. Video title: ¡CORRI a 9999 Km/h para ser el JUGADOR #1 en HORSE RACE SIMULATOR!!! 🐎 ROBLOX (I RAN at 9999 km/h to be PLAYER #1 in HORSE RACE SIMULATOR!!!). Content creator (YouTuber): Invictor. Official YT video:
  13. Movie name: Nowhere. Year: 2023. Netflix / Amazon / HBO: N/A. Duration: 1h 48m. Trailer:
  14. https://www.bbc.com/news/technology-67660964 The Ministry of Defence (MoD) has been fined £350,000 over an email blunder that exposed details of interpreters fleeing Afghanistan. The 265 people affected had worked with the UK government; some were in hiding when the Taliban seized control. Lives could have been at risk had the data fallen into Taliban hands, the data watchdog said. The MoD said it recognised the severity of the breach, fully acknowledged the ruling and apologised to the victims. The Information Commissioner, John Edwards, said the error "let down those to whom our country owes so much". He added: "This was a particularly egregious breach of the obligation of security owed to these people, thus warranting the financial penalty my office imposes today."

Reply all

The main breach was first revealed by the BBC in September 2021. It occurred when the Afghan relocations and assistance policy team (Arap) sent a mass email to 245 people who had worked with the UK government and were eligible for evacuation, most, but not all, of them interpreters. In the message, their addresses were put in the "to" field rather than the intended blind carbon copy (Bcc) field, meaning the email addresses were visible to all recipients. Further information about those trying to leave Afghanistan, including one person's location, was then exposed when two people responded to the email by selecting "reply all". An MoD internal investigation found two similar incidents, bringing the total number of people affected to 265, the Information Commissioner's Office said. According to the ICO, the Bcc error is one of the top causes of data breaches.

'Could have cost lives'

An interpreter affected by the breach, speaking in 2021, told the BBC the mistake "could cost the life of interpreters, especially for those who are still in Afghanistan". "Some of the interpreters didn't notice the mistake and they replied to all the emails already and they explained their situation, which is very dangerous. The email contains their profile pictures and contact details." Former defence secretary Ben Wallace said at the time it would be an understatement to say he had been angered by the breach. The incident "let down the thousands of members of the armed forces and veterans", Mr Wallace told the House of Commons in September 2021.

The ICO's investigation into the breach found that between August and September 2021 the MoD failed to comply with UK data protection requirements for technical processes to safeguard data. The commissioner acknowledged the difficult circumstances under which the incident occurred, but "when the level of risk and harm to people heightens, so must the response," Mr Edwards said. The watchdog said it had reduced an initial fine of £1m to £700,000 in recognition of the measures taken by the MoD to report the incident and limit its impact, and of the difficulties of the situation for teams handling the relocation of staff. This was cut further to £350,000 as part of an ongoing effort by the ICO to reduce the impact of government fines on the public. The MoD said it had "cooperated extensively" with the data watchdog to resolve the breach. "We recognise the severity of what has happened. We fully acknowledge today's ruling and apologise to those affected," a spokesperson said.
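Since the breach came down to recipients being placed in the visible "to" field, it is worth showing the safe pattern. Below is a minimal sketch using only Python's standard library, where the recipient list is passed as the SMTP envelope rather than exposed in a header; the addresses and SMTP host are placeholders.

```python
# Minimal sketch of the safe mass-mail pattern: recipients travel in the
# SMTP envelope (the Bcc-style route), never in the visible "To" header.
# All addresses and the SMTP host below are placeholders.
import smtplib
from email.message import EmailMessage

recipients = ["person1@example.com", "person2@example.com"]  # placeholder list

msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "Undisclosed recipients <sender@example.org>"  # leaks no addresses
msg["Subject"] = "Update"
msg.set_content("Message body goes here.")

with smtplib.SMTP("smtp.example.org") as server:
    # to_addrs sets the envelope recipients directly, so each person receives
    # the message without ever seeing the other addresses; send_message also
    # never transmits any Bcc header present on the message.
    server.send_message(msg, to_addrs=recipients)
```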