
Titan ;x

Banned
  • Posts: 3,106
  • Joined
  • Last visited
  • Days Won: 8
  • Country: Albania

Everything posted by Titan ;x

  1. Welcome! Have fun.
  2. Welcome! Have fun.
  3. Welcome! Have fun.
  4. Welcome back!
  5. Welcome! Have fun.
  6. Welcome to csblackdevil! Have fun.
  7. Welcome back! Have fun.
  8. help?

    Hello, please give us a screenshot.
  9. Introduction

    A filesystem is the way an operating system organizes files on a disk. Filesystems come in many different flavors depending on your specific needs. For Windows you have NTFS, FAT, FAT16, or FAT32; for Macintosh you have HFS; and for Linux you have more filesystems than we can list in this tutorial.

    One of the great things about Linux is the ability to access data stored on many different filesystems, even filesystems from other operating systems. In order to access a filesystem in Linux you first need to mount it. Mounting a filesystem simply means making that filesystem accessible at a certain point in the Linux directory tree. When mounting a filesystem it does not matter whether it is a hard disk partition, CD-ROM, floppy, or USB storage device. You simply need to know the device name associated with the storage device and the directory you would like to mount it to.

    Having the ability to mount a new storage device at any point in the directory tree is very advantageous. For example, let's say you have a website stored in /usr/local/website. The website has become very popular and you are running out of space on your 36 GB hard drive. You can simply purchase a new 73 GB hard drive, install it in the computer, and mount the entire drive as /usr/local. Your /usr/local mount point now has a total of 73 GB, and you can free up the old hard drive by copying everything from the old /usr/local to the new one. As you can see, adding more hard drive space to a computer while keeping the exact same directory structure is now very easy.

    Seeing a list of mounted filesystems

    To determine which filesystems are currently in use, type the command:

    $ mount

    This command displays every mounted device, the filesystem type it is mounted as, and the mount point: the local directory assigned to the filesystem during the process of mounting.

    How to mount filesystems

    Before you can mount a filesystem to a directory, you must usually be logged in as root (some filesystems can be mounted by a standard user), and the directory you want to mount the filesystem to must first exist. In some situations you must also be logged in as root in order to create that mount directory. If the directory exists and any user is allowed to mount the particular device, then it is not necessary to be logged in as root.

    When mounting a particular filesystem or device you need to know the special device file associated with it. A device file is a special file in Unix/Linux operating systems that allows programs and users to communicate directly with the various partitions and devices on your computer. These device files are found in the /dev folder.

    As our first example, let's take the real-world task of accessing the Windows files on a floppy from Linux. In order to mount a device to a particular folder, that folder must exist. Many Linux distributions contain a /mnt folder, or even a /mnt/floppy folder, that is used to mount various devices. If the folder you would like to mount the device to exists, you are all set. If not, you need to create it like this:

    $ mkdir /mnt/floppy

    This command creates a directory called /mnt/floppy. The next step is to mount the filesystem to that folder, or mount point.
    $ mount -t msdos /dev/fd0 /mnt/floppy

    You have now mounted an msdos filesystem: the -t (type) option indicates the filesystem type, /dev/fd0 is the device file, and /mnt/floppy is the mount point. You can now access MS-DOS-formatted disks as you would any other directory.

    Mounting a CD-ROM works the same way:

    $ mount -t iso9660 /dev/cdrom /mnt/cdrom

    Other filesystems can be mounted in a similar manner:

    $ mount -t vfat /dev/hda1 /win

    The df command lists the mounted filesystems along with their disk usage, so you can see what you have to work with.

    Note: The -t option should be used so that the operating system knows the specific filesystem type you would like to mount the device as. If you leave the -t option out of the command, mount will attempt to determine the correct filesystem type on its own.

    How to unmount a filesystem

    When you are done using a particular filesystem, you should unmount it. The command to do so is umount: simply type umount followed by the mount point. For example:

    $ umount /mnt/floppy
    $ umount /mnt/cdrom

    Conclusion

    Now that you know how to mount and unmount filesystems in Linux, even those from other operating systems, Linux should be an even more attractive and powerful tool. For more information about the mount and umount commands, view their man pages (help files) by typing:

    $ man mount
    $ man umount
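    To recap everything covered above, here is a minimal, hypothetical session for a vfat-formatted USB stick; the device name /dev/sdb1 and the mount point /mnt/usb are illustrative and will vary from system to system:

    $ mkdir /mnt/usb                      # create the mount point if it does not already exist
    $ mount -t vfat /dev/sdb1 /mnt/usb    # mount the device at the mount point
    $ df -h /mnt/usb                      # confirm the mount and check free space
    $ umount /mnt/usb                     # unmount when finished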
  10. As someone who cut their teeth on early versions of Unix, one thing I’ve really missed on Windows is the wide variety of powerful and flexible commands that come with most Unix distros. As a practical matter, the Windows shell (and PowerShell) have grown over time to provide essentially the same capabilities, but why relearn everything? Especially for those of us who also have Linux-based dev systems and Linux-based servers, it is really painful to have to do everything two different ways. Cygwin is nice, but not really seamless or all-encompassing. So I couldn’t wait to try out Canonical’s Ubuntu for Windows when Microsoft announced it would become part of Windows 10. I snagged the Insider build as soon as it was available, and so far I have not been disappointed.

    Getting Ubuntu on your Windows 10 system

    Presumably this will be easy once the final “Anniversary Edition” of Windows 10 ships, but for now it involves a few steps. For starters, you need the latest Windows Insider preview build 14316 installed. Then you need to make sure Developer Mode is enabled in Settings, and add the Windows Feature for “Windows Subsystem for Linux (Beta)” (which may in turn require a reboot). Then you can type bash at a command prompt, and the system will download a compressed Ubuntu file system, extract it, put your disk drives under /mnt, and you’re good to go! Microsoft’s full set of instructions and disclaimers are online.

    Bring your environment with you

    It’s a real productivity assist to have your favorite aliases available wherever you’re working, so the first thing I wanted to try was to bring over a .bashrc. I’ve gotten pretty lazy about my “.” files over the years, so I figured my simple one wasn’t a real test. Instead, I snagged this fairly complex, 99-line .bashrc file, which tested out fine on my CentOS-powered server, and it did equally well on my Windows 10 + Ubuntu system. Cool!

    Packages, look, we get packages

    Ubuntu on Windows 10 is surprisingly complete; here I snagged the packages needed to add git. Somehow, no matter how much I read that this was “really” Ubuntu, I kept expecting something fairly limited, and I certainly didn’t expect I could simply grab packages. But there it was: apt-get. I almost didn’t dare try it, but it worked! For example, git is not part of the default installation, but it was easy enough to have the needed packages retrieved and installed with a single command (the exact commands are recapped at the end of this post).

    Mac Envy — GONE

    As a longtime Sun employee, my favorite version of Unix has always been BSD. So I was really jealous that all my Mac-owning friends could use all the great commands I’d gotten used to: want to get a file, use rcp (aka scp); want to archive, use tar or gzip; and so on. The great thing about modern distros is that they’ve mostly absorbed all the possible Unix command sets into one big, glorious one. So these commands are all pretty much either there or can be aliased to a newer version (like vi to vim).

    Glitches and limits

    There are definitely some oddities when running in Ubuntu. For example, even though I’m ‘root’, I can’t get file attributes on some system files. Microsoft is very clear that this is a super-early release. As they state, the terminal emulation isn’t perfect: many of the control characters don’t do what they should, and top doesn’t work right, for example. But I was amazed at how well all the basic stuff works. At some point you do hit the limits of what’s possible.
    For example, there is a netstat command, but it can’t find the devices it expects under /proc, so it doesn’t report much. I can imagine even that being fixed before the final release, and in any case, having a few system commands not work is a pretty small price to pay.
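    Coming back to the packages section: for anyone who wants to reproduce the git install, the standard Ubuntu commands are all it takes (nothing WSL-specific assumed here):

    $ sudo apt-get update          # refresh the package lists
    $ sudo apt-get install git     # retrieve and install git and its dependencies
    $ git --version                # verify the installation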
  11. Samsung announced this week that it has begun production of 8Gb DDR4-3200 chips on its new “10nm-class” production lines. According to Samsung, these new chips aren’t just a business-as-usual node shrink; the company had to perform some significant additional design steps to bring the hardware to market.

    First, a bit of clarification: this isn’t actually 10nm DRAM, though Samsung wouldn’t mind if you thought it was. Samsung’s PR helpfully states that “10nm-class” denotes a process technology node somewhere between 10 and 19 nanometers, while “20nm-class” means a process technology node somewhere between 20 and 29 nanometers. The company goes on to note that while its first “20nm-class” DDR3 came to market in 2011, it didn’t actually launch 20nm DDR3 until 2014. We expect something similar to be happening here. This kind of sleight-of-hand has become a bit of a Samsung trait; the company also likes to claim its EVO family of drives uses “3-bit MLC” NAND as opposed to TLC, probably because the TLC moniker took a bit of a beating after the 840 EVO had so many long-term problems. But that’s a different topic.

    10nm or not, Samsung claims that it had to adopt quadruple-patterning lithography for its new DDR4, as well as develop a new proprietary cell design and new methods of ultra-thin dielectric layer deposition. The new DDR4 is expected to clock up to 3.2GHz, and we’ll undoubtedly see third-party manufacturers ramping higher than that. DDR4-4266 is technically already available on NewEgg, provided you’re willing to pay $300 for 8GB of RAM. The performance benefit of that much memory frequency is questionable, to say the least, but we typically see a steady decrease in RAM prices and an increase in memory frequencies over the life of any given RAM generation. DDR4 is still relatively young; it wouldn’t be surprising to see DDR4-4266 selling for a fraction of what it costs today in a few more years.

    The counter-argument, however, is that Samsung is relying on quad patterning to manufacture this DRAM, which means performing multiple additional lithography steps. There are multiple ways to perform multi-patterning, and Samsung hasn’t specified which it uses, but the important thing to know for our purposes is that multi-patterning significantly increases manufacturing costs. DRAM produced by this method may not hit the same price points as older memory did, or it may simply take longer to decrease in price.

    Samsung intends to take what it has learned from this new “10nm-class” product and deploy it in mobile form factors later this year. JEDEC’s LPDDR4 roadmap already has a path to 4266MHz, and we may see Samsung rolling out high frequencies in the near future. As screen resolutions have skyrocketed, mobile GPUs have often struggled to keep pace, and adding faster RAM is the best way to improve performance in an otherwise-bottlenecked application.
  12. V1: color, effects.
  13. Yesterday’s rumor mill proposed that the Tesla Model S would soon receive a facelift and a higher-capacity battery pack. We now have confirmation of half of that rumor, as Tesla’s website now shows a Model S with a revised, grille-less front end that more closely matches the Model X SUV and the recently revealed Model 3. There’s no word yet on the possible 100D model with a 100-kWh battery pack for increased range, although the updated Model S page does show increased range for the 90D and P90D models. The 90D goes from 270 miles to 294 miles, while the high-performance P90D goes from 253 miles to 270 miles. We don’t yet know where this extra range comes from, but we’ve reached out to Tesla for comment.

    Other changes include an upgraded 48-amp charger that replaces the old 40-amp system for faster charging, two new interior trim choices (Figured Ash Wood and Dark Ash Wood), and the special air-filtration system that was introduced on the Model X. We’ll keep you updated with more information as it becomes available.
  14. Young smokers, please take note! Smokers have more trouble finding a job, and when they do find one, they earn considerably less than their non-smoking peers, says an interesting study. The findings showed that at 12 months, only 27 percent of smokers had found jobs, compared with 56 percent of non-smokers. Among those who had found jobs by 12 months, smokers earned on average $5 less per hour than non-smokers.

    “We found that smokers had a much harder time finding work than non-smokers,” said lead study author Judith Prochaska of Stanford University Medical Center in the US. The team surveyed 131 unemployed smokers and 120 unemployed non-smokers at the beginning of the study and then again at six and 12 months, using survey questions and a breath test for carbon monoxide levels to classify job seekers as either daily smokers or non-smokers.

    “The health harms of smoking have been established for decades, and our study here provides insight into the financial harms of smoking, both in terms of lower re-employment success and lower wages,” Prochaska added in a paper published in the journal JAMA Internal Medicine.

    Smokers were on average younger, less educated, and in poorer health than non-smokers. “Such differences might influence job seekers’ ability to find work,” Prochaska stated. Even after controlling for these variables, smokers remained at a big disadvantage: after 12 months, the re-employment rate of smokers was 24 percent lower than that of non-smokers. “We designed the analysis so that the smokers and non-smokers were as similar as possible in terms of the information we had on their employment records and prospects for employment at baseline,” added co-author Michael Baiocchi.

    Those who successfully quit smoking should have an easier time getting hired, the authors suggested.
  15. Using the search feature

    Once you've used Windows 8 for a while, you'll start to have more and more files, such as music, photos, and documents, and it may sometimes be difficult to find the exact file you want. You may even have trouble finding a specific app, since Windows 8 has moved everything around. Luckily, there is a built-in search feature that can help you find files, apps, or almost anything else on your computer.

    To search from the Start screen: from the Start screen, type what you're looking for. Your search results will instantly appear below the search bar, and a list of suggested web searches will appear below the results.

    Using different search options

    You can select different search options to help find specific files and settings. Just click the drop-down arrow above the search box, and then select the desired option: select Settings or Files to search for a setting or file, or select Web images or Web videos to search the Web.

    Searching from the Desktop

    If you're on the Desktop, you will first need to press the Windows key to switch to the Start screen, and then type what you're looking for.

    Searching on a tablet

    If you're using a tablet without an attached keyboard, you can search by swiping in from the right and then selecting the Search charm. You can then type what you're looking for.
  16. Yesterday, Nvidia took the wraps off its high-end GP100 GPU and gave us a look at what its top-end HPC configuration will look like come Q1 2017. While this new card is explicitly aimed at the scientific computing market, and Nvidia has said nothing about future consumer products, the information the company revealed confirms some of what we’ve privately heard about next-generation GPUs from both AMD and Nvidia. If you’re thinking about spending some of your tax rebate on a new GPU, or just eyeing the market in general, we’d recommend waiting at least a few more months before pulling the trigger. It may even be worth waiting until the end of the year, based on what we now know is coming down the pipe.

    What to expect when you’re expecting (a new GPU)

    First, a bit of review. We already know AMD is launching a new set of GPUs this summer, codenamed Polaris 10 and Polaris 11. These cores are expected to target the sweet spot of the add-in-board (AIB) market, which typically means the $199-$299 price segment. High-end cards like the GTX 980 Ti and Fury X may command headlines, but both AMD and Nvidia ship far more GTX 960s and Radeon R7 370s than they do top-end cards. Polaris 10 and 11 are expected to use GDDR5 rather than HBM (I’ve seen the rumors that claim some Polaris SKUs might use HBM1; it’s technically possible, but I think it exceedingly unlikely), and AMD has said these new GPUs will improve performance-per-watt by 2.5x compared with their predecessors. The company’s next-generation Vega GPU family, which arrives late this year with 4,096 shader cores and HBM2 memory, is rumored to be the first ground-up new architecture since GCN debuted in 2012.

    We don’t yet know what Nvidia’s plans are for consumer-oriented Pascal cards, but the speeds and core counts of GP100 tell us rather a lot about the benefits of 16nm FinFET and how it will impact Nvidia’s product lines this generation. With GP100, Nvidia increased its core count by 17% while simultaneously ramping the base clock up by 40%. Baseline TDP for this GPU, meanwhile, increased by 20%, to 300W. The relationship between clock speed, voltage, and power consumption is not linear, but the GTX Titan X shipped with a base clock of 1GHz, only slightly higher than the Tesla M40’s 948MHz. GP100 has up to 60 SM units (only 56 are enabled), which puts the total number of cores on-die at 3,840. That’s 25% more cores than the old M40, yet the die is just 3% larger.

    We may not know details, but the implications are straightforward: Nvidia should be able to deliver a high-end consumer card with 30-40% higher clocks and significantly higher core counts within the same price envelopes Maxwell occupies today. We don’t know when Team Green will start refreshing its hardware, but it’ll almost certainly be within the next nine months. Here’s the bottom line: AMD is going to start refreshing its midrange cards this summer, and it’d be unusual if Nvidia didn’t have fresh GPUs of its own to meet them. Both companies will likely follow with high-end refreshes towards the end of the year or very early next year, again probably within short order of each other.

    When waiting makes sense

    There’s a cliche in the tech industry that claims it’s foolish to try to time your upgrades because technology is always advancing. Ten to twelve years ago, when AMD and Nvidia were nearly doubling their top-end performance every single year, this kind of argument made sense. Today, it’s much less valid.
    Technology advances year-on-year, but the rate and pace of those advances can vary significantly. The 14/16nm node is a major stepping stone for GPU performance because it’s the first full-node shrink the GPU industry has had in more than four years. If you care about low power consumption and small form factors, the upcoming chips should be dramatically more power efficient. If you care about high-end performance, you may have to wait another nine months, but the amount of GPU you’ll be able to buy for the same amount of money should be 30-50% higher than what you’ll get today.

    There’s also the question of VR technology. We don’t yet know how VR will evolve or how seriously it will impact the future of gaming; estimates I’ve seen range from total transformation to a niche market for a handful of well-heeled enthusiasts. Regardless, if you plan on jumping on the VR bandwagon, it behooves you to wait and see what kind of performance next-generation video cards can offer. Remember this: VR demands both high frame rates and extremely smooth frame delivery, and this has knock-on effects on which GPUs can reliably deliver that experience. A GPU that drives 50 frames per second where 30 is the minimum requirement is pushing 1.67x the frame rate the user demands as a minimum standard. A GPU that delivers 110 frames per second where 90 is the minimum requirement is running at only 1.22x its target. It doesn’t take much in the way of additional eye candy before that second GPU is bottoming out at 90 FPS again (the quick math is recapped at the end of this post).

    The final reason to consider delaying an upgrade is whether you plan to move to a 4K monitor at any point in the next few years. 4K pushes roughly 4x the pixels of a 1080p monitor, and modern graphics cards are often 33-50% slower when gaming at that resolution. Waiting a few more months to buy at the beginning of the new cycle could mean 50% more performance for the same price, and gives you a better chance of buying a card that can handle 4K in a wider variety of titles.

    If your GPU suddenly dies tomorrow, or you can’t stand running an old HD 5000 or GTX 400-series card another minute, you can upgrade to a newer AMD or Nvidia model available today and still see an enormous performance uplift. But customers who can wait for the next-generation refreshes to arrive will be getting much more bang for their buck. We don’t know what the exact specs will be for any specific AMD or Nvidia next-gen GPU, but what we’re seeing and hearing about the 16/14nm node is extremely encouraging. If you can wait, you almost certainly won’t regret it, especially if you want a clearer picture of which company, AMD or Nvidia, performs better in DirectX 12.
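    The frame-rate headroom arithmetic above is easy to sanity-check; these one-liners (purely illustrative) simply divide the delivered frame rate by the minimum requirement:

    $ awk 'BEGIN { printf "%.2f\n", 50/30 }'    # 1.67x headroom over a 30 FPS floor
    $ awk 'BEGIN { printf "%.2f\n", 110/90 }'   # 1.22x headroom over a 90 FPS floor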
  17. For decades, computer chips became smaller and more efficient by shrinking the size of various features and finding ways to pack more transistors into a smaller area of silicon. As die shrinks have become more difficult, companies have turned to 3D die stacking and technologies like HBM (High Bandwidth Memory) to improve performance. We’ve talked a great deal about HBM and HBM2 in the past few years, but photographic evidence of the die savings is a bit harder to come by. SK Hynix helpfully had some HBM memory on display at GTC this year, and Tweaktown caught photographic evidence of 8Gb of GDDR5 next to a 1GB HBM stack and a 4GB HBM2 stack. The one quibble I have with the Hynix display is that the labeling mixes GB and Gb.

    The HBM2 package is significantly larger than the HBM1 chip, but still much smaller than the 8Gb of GDDR5, despite packing 4x more memory into its diminutive form factor. We don’t expect HBM2 to hit market until the tail end of this year and the beginning of next; GDDR5 is expected to have one last hurrah with the launch of AMD’s Polaris this year. These space savings, however, illustrate why both AMD and Nvidia are moving to HBM2 at the high end. Smaller dies mean smaller GPUs with higher memory densities for consumer, professional, and scientific applications alike. Technologies like GDDR5X, which rely on 2D planar silicon, can’t compete with the capacity advantage of layering multiple chips on top of each other and connecting them with TSVs (through-silicon vias). GDDR5 will continue to be used for budget and midrange cards this generation, but HBM2 will likely replace it over the long term as prices fall, lower-end cards require more VRAM, and manufacturing yields improve.

    Over the long term, though, even HBM2 isn’t enough to feed the needs of next-generation exascale systems. The slide above, from an Nvidia presentation on high-performance computing (HPC) and the energy requirements of DRAM subsystems, shows that shifting to HBM drives a significant improvement in I/O power and an absolute improvement in total power consumption for the DRAM subsystem: HBM2 draws less power to provide 1TB/s of bandwidth than GDDR5 used to provide 200GB/s. Unfortunately, straightforward scaling of the HBM2 interface won’t keep future memory standards from exceeding GDDR5’s power requirements. Long-term, additional improvements and process node shrinks are still necessary, even if die stacking has replaced planar silicon die shrinks as the primary performance driver.
