Available Physical Memory Lower Than Total


October 2016

Volume 31 Number 10

[Universal Windows Platform]

By Andrew Whitechapel

Far more than any other app platform, the Universal Windows Platform (UWP) supports a vast range of background activities. If these were allowed to compete for resources in an uncontrolled manner, it would degrade the foreground experience to an unacceptable level. All concurrent processes compete for system resources—memory, CPU, GPU, disk and network I/O, and so on. The system Resource Manager encapsulates rules for arbitrating this contention, and the two most important mechanisms are memory limits and task priorities.

The promise of the UWP is that a developer can build an app that will run successfully on a wide range of Windows 10 platforms, from a minimalist IoT device, to the full range of mobile and desktop devices, plus Xbox and HoloLens. Resource policy applies to all Windows 10 platforms, and most policy is common across the range—specifically to support the UWP promise of consistency. That said, some aspects of policy do vary, because different platforms support different sets of hardware devices with different capabilities.

So, for example, the memory limits on a Lumia 950 phone are almost identical to those on a HoloLens because these two devices have similar RAM characteristics and other hardware capabilities. Conversely, the Lumia 950 limits are significantly higher than on a Lumia 650, which has far less physical RAM and a lower hardware specification, generally. Pagefile is another factor: Desktop devices have a dynamically sizeable pagefile that’s also often very fast, whereas on all other Windows 10 devices, the pagefile is small, slow and fixed in size. This is one reason why memory limits are completely removed on desktop, but enforced on all other devices.

In a few well-defined scenarios, memory limits can also vary at different times on the same device, so apps should take advantage of the Windows.System.MemoryManager APIs to discover the limit that’s actually applied at any point in time. This API will always reliably tell the app its current limit and its current usage—and these same values are exactly the values that the Resource Manager uses in its own internal calculations. In the following example, the app pays attention to its memory limit, and before it attempts a memory-intensive operation, it checks to see that it does in fact have enough headroom available for this operation:
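A minimal sketch of that check follows; the 100MB estimate and the LoadLargeImagesAsync operation are hypothetical placeholders for the app's own work:

```csharp
using System.Threading.Tasks;
using Windows.System;

// A sketch only: the 100MB estimate and LoadLargeImagesAsync are hypothetical.
private async Task ProcessImagesIfMemoryAllowsAsync()
{
    // Both values are reported in bytes and match what the Resource Manager uses.
    ulong limit = MemoryManager.AppMemoryUsageLimit;
    ulong usage = MemoryManager.AppMemoryUsage;
    ulong headroom = limit > usage ? limit - usage : 0;

    const ulong estimatedCost = 100 * 1024 * 1024; // Estimated cost of the operation.

    if (headroom > estimatedCost)
    {
        await LoadLargeImagesAsync(); // Hypothetical memory-intensive operation.
    }
    else
    {
        // Not enough headroom: fall back to a lower-memory path or skip the feature.
    }
}
```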

It helps to think of memory as just another device capability. That is, it’s common for an app to test the availability of the device features it can use. Is there a compass on this device? Is there a forward-facing camera? Also, some features are available only in certain app states. For example, if a device has a microphone, it’s almost always available to the app in the foreground, but typically not available to any background task. So it behooves the app to check availability at different times. In the same way, the app should be testing how much memory is available to it at any given time. The app can adapt to this by, for example, selecting different image resolutions, or different data transfer options, or even by completely enabling or disabling certain app features. Documentation for the MemoryManager API is at bit.ly/2bqepDL.

Memory Limits

What happens if an app hits its limit? Contrary to popular belief, in most cases, the Resource Manager doesn’t terminate apps for out-of-memory conditions. Instead, if the app does something that would result in a memory allocation that would exceed its limit, the allocation fails. In some cases, the failure is surfaced to the app (as an OutOfMemoryException in a managed code app, or a null pointer in a native app). If this happens, the app can handle the failure. If not, the app will crash. Consider the following examples. DoSomething is allocating simple byte array memory in an infinite loop that will eventually result in an OutOfMemoryException, which the app can handle:
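A sketch of such a method, allocating 10MB managed byte arrays until an allocation fails:

```csharp
using System;
using System.Collections.Generic;

// A sketch only: allocates 10MB managed byte arrays until the allocation fails.
private void DoSomething()
{
    var allocations = new List<byte[]>();
    try
    {
        while (true)
        {
            allocations.Add(new byte[10 * 1024 * 1024]);
        }
    }
    catch (OutOfMemoryException)
    {
        // The failure surfaces here and can be handled: release references,
        // notify the user, and fail gracefully.
        allocations.Clear();
    }
}
```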

Conversely, DoAnother is using imaging APIs in an infinite loop that are internally allocating memory on the native heap for graphics data. This allocation is outside the app’s direct control, and when it fails, it will almost certainly not propagate any exception that can be handled to the app and, therefore, the app will simply crash:
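A sketch of this pattern, using SoftwareBitmap as the imaging API; the allocations land on the native heap, outside the app's direct control:

```csharp
using System.Collections.Generic;
using Windows.Graphics.Imaging;

// A sketch only: each SoftwareBitmap allocates graphics memory on the native
// heap. When one of these allocations fails, the failure typically isn't
// surfaced as a catchable exception, and the app simply crashes.
private void DoAnother()
{
    var bitmaps = new List<SoftwareBitmap>();
    while (true)
    {
        bitmaps.Add(new SoftwareBitmap(BitmapPixelFormat.Bgra8, 2000, 2000));
    }
}
```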

The scenario is a little contrived, as no app would realistically expect to be able to create an infinite number of bitmaps, but the point is that some allocation failures are easily handled while others are not. You should handle OutOfMemoryExceptions when you can, and examine your app code for scenarios where memory is allocated outside your direct control; police these areas carefully to avoid failures. You’re more likely to be successful handling exceptions for operations that allocate large amounts of memory—attempting to handle OutOfMemoryExceptions for small allocations is usually not worth the added complexity. It’s also worth noting that an app can hit an OutOfMemoryException well below its limit if it’s making very large allocations—and especially in managed code. This can arise as a result of address space fragmentation for your process. For example, the DoSomething method is allocating 10MB blocks, and it will hit OutOfMemoryException sooner than if it were allocating 1MB blocks. Finally, it must be said that the cases where your app can handle an OutOfMemoryException and continue in a meaningful way are rare; in practice, it’s more often used as an opportunity to clean up, notify the user and then fail gracefully.

Using Task Priorities to Resolve Contention

The system arbitrates between competing task types by weighing the relative importance of each user scenario. For example, the system generally assigns a higher priority to the app with which the user is actively engaged, and a lower priority to background activity of which the user might even be completely unaware. Even among background tasks there are different priority levels. For example, VoIP and push notification tasks are typically higher priority than time-triggered tasks.

When the user launches an app, or when a trigger event tries to activate a background task, the Resource Manager checks to see if there are sufficient free resources for this request. If there are, the activation goes ahead. If not, it then examines all running tasks and starts canceling (or in some cases rudely terminating) tasks from the lowest priority upward until it has freed enough resources to satisfy the incoming request.

Prioritization is finely nuanced, but everything falls into one of two broad priority categories, summarized in Figure 1.

Figure 1 The Two Broad Categories of App Task

Category: Critical tasks
Typical Examples: Foreground app activations and some important background tasks such as VoIP, background audio playback and any background task invoked directly by a foreground app.
Description: These are effectively always guaranteed to run whenever requested (except in cases of extreme and unexpected system process activity).

Category: Opportunistic tasks
Typical Examples: Everything else.
Description: These are only allowed to launch (or to continue to run) when there are sufficient available resources and there’s no higher-priority task contending those resources. There are multiple finely grained priority levels within this category.

Soft and Hard Memory Limits

Resource policy limits ensure that no one app can run away with all the memory on the device to the exclusion of other scenarios. However, one of the side effects is that a situation can arise where a task can hit its memory limit even though there might be free memory available in the system.

The Windows 10 Anniversary Update addresses this by relaxing the hard memory limits to soft limits. To best illustrate this, consider the case of extended execution scenarios. In previous releases, when an app is in the foreground it has, say, a 400MB limit (a fictitious value for illustration only), and when it transitions to the background for extended execution, policy considers it to be less important—plus it doesn’t need memory for UI rendering—so its limit is reduced to perhaps 200MB. Resource policy does this to ensure that the user can successfully run another foreground app at the same time. However, in the case where the user doesn’t run another foreground app (other than Start), or runs only a small foreground app, the extended execution app may well hit its memory limit and crash even though there’s free memory available.

So in Windows 10 Anniversary Update, when the app transitions to extended execution in the background, even though its limit is reduced, it’s allowed to use more memory than its limit. In this way, if the system isn’t under memory pressure, the extended execution app is allowed to continue, increasing the likelihood that it can complete its work. If the app does go over its limit, the MemoryManager API will report that its AppMemoryUsageLevel is OverLimit. It’s important to consider that when an app is over-limit, it’s at higher risk of getting terminated if the system comes under memory pressure. The exact behavior varies per platform: Specifically, on Xbox, an over-limit app has two seconds to get itself below its limit or it will be suspended. On all other platforms, the app can continue indefinitely unless and until there’s resource pressure.

The net result of this change is that more tasks will be able to continue in the background more often than before. The only downside is that the model is slightly less predictable: Previously, a task that attempted to exceed its limit would always fail to allocate (and likely crash). Now, the allocation-failure-and-crash behavior doesn’t always follow: The task will often be allowed to exceed its limit without crashing.

The Resource Manager raises the AppMemoryUsageIncreased event when an app’s memory usage increases from any given level to a higher level, and conversely, the AppMemoryUsageDecreased event when it decreases a level. An app can respond to AppMemoryUsageIncreased by checking its level and taking appropriate action to reduce its usage:
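A sketch of such a handler; ReduceMemoryUsage is a hypothetical app-specific method that drops caches or other expendable data:

```csharp
using Windows.System;

// Subscribe once, for example at app startup:
// MemoryManager.AppMemoryUsageIncreased += OnAppMemoryUsageIncreased;

// A sketch only: ReduceMemoryUsage is a hypothetical app-specific method.
private void OnAppMemoryUsageIncreased(object sender, object e)
{
    AppMemoryUsageLevel level = MemoryManager.AppMemoryUsageLevel;
    if (level == AppMemoryUsageLevel.High || level == AppMemoryUsageLevel.OverLimit)
    {
        ReduceMemoryUsage();
    }
}
```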

Then, when it has successfully reduced its usage, it can expect to get a further notification that it has fallen to a safer level, via an AppMemoryUsageDecreased event:
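A corresponding sketch for the decrease notification; RestoreNormalOperation is again a hypothetical app-specific method:

```csharp
// A sketch only: RestoreNormalOperation is a hypothetical app-specific method
// that re-enables features the app disabled while under memory pressure.
private void OnAppMemoryUsageDecreased(object sender, object e)
{
    AppMemoryUsageLevel level = MemoryManager.AppMemoryUsageLevel;
    if (level == AppMemoryUsageLevel.Low || level == AppMemoryUsageLevel.Medium)
    {
        RestoreNormalOperation();
    }
}
```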

An app can also sign up for the AppMemoryUsageLimitChanging event, which the Resource Manager raises when it changes an app’s limit. The OverLimit scenario deserves special handling, because of the associated change in priority. An app can listen to the notification event that’s raised when the system changes its limit, so it can immediately take steps to reduce its memory consumption. For this scenario, you should use the old and new limit values passed in as payload of the event, rather than querying the AppMemoryUsageLevel directly:
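A sketch of such a handler, reading the NewLimit value from the event args; ReduceMemoryUsage is again a hypothetical app-specific method:

```csharp
// A sketch only: uses the NewLimit value passed in the event args rather than
// querying AppMemoryUsageLevel, which may not yet reflect the new limit.
private void OnAppMemoryUsageLimitChanging(
    object sender, AppMemoryUsageLimitChangingEventArgs e)
{
    if (MemoryManager.AppMemoryUsage >= e.NewLimit)
    {
        // Usage already exceeds the incoming limit: shed memory immediately.
        ReduceMemoryUsage(); // Hypothetical app-specific method.
    }
}
```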

Extended execution is just one of the scenarios where the limit is changed. Another common scenario is where the app calls external app services—each of these will reduce the calling app’s limit for the duration of the call. It’s not always obvious when an app is calling an app service: For example, if the app uses a middleware library, this might implement some APIs as app services under the covers. Or, if the app calls into system apps, the same might happen; Cortana APIs are a case in point.

ProcessDiagnosticInfo API

Commit usage is the amount of virtual memory the app has used, including both physical memory and memory that has been paged out to the disk-backed pagefile. Working set is the set of memory pages in the app’s virtual address space that’s currently resident in physical memory. For a detailed breakdown of memory terminology, see bit.ly/2b5UwjL. The MemoryManager API exposes both a GetAppMemoryReport and a GetProcessMemoryReport for commit metrics and working-set metrics, respectively. Don’t be misled by the names of the properties—for example, in the AppMemoryReport class, the private commit used by the app is represented by PrivateCommitUsage (which seems obvious), whereas in the ProcessMemoryUsageReport class the same value is represented by PageFileSizeInBytes (which is a lot less obvious). Apps can also use a related API: Windows.System.Diagnostics.ProcessDiagnosticInfo. This provides low-level diagnostic information on a per-process basis, including memory diagnostics, CPU and disk-usage data. This is documented at bit.ly/2b1IokD. There’s some overlap with the MemoryManager API, but there’s additional information in ProcessDiagnosticInfo beyond what’s available in MemoryManager. For example, consider an app that allocates memory, but doesn’t immediately use it:
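A sketch of such an app; the 50MB buffer size is illustrative only:

```csharp
using System.Collections.Generic;

// A sketch only: the 50MB buffer size is illustrative.
private readonly List<byte[]> _buffers = new List<byte[]>();

private void ConsumeMemory()
{
    // Allocating the array increases the app's commit, but until the pages are
    // actually written to, they don't necessarily enter the working set.
    _buffers.Add(new byte[50 * 1024 * 1024]);
}
```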

You could use the ProcessMemoryReport or ProcessMemoryUsageReport to get information about commit and working-set, including private (used only by this app), total (includes private plus shared working set), and peak (the maximum used during the current process’s lifetime so far). For comparison, note that the memory usage reported by Task Manager is the app’s private working-set:
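A sketch of reading these values, using the property names exposed by AppMemoryReport, ProcessMemoryReport and ProcessMemoryUsageReport:

```csharp
using System.Diagnostics;
using Windows.System;
using Windows.System.Diagnostics;

// A sketch only: logs commit and working-set values from the two APIs.
private void LogMemoryMetrics()
{
    // Commit, as reported by MemoryManager (the basis of resource policy decisions).
    AppMemoryReport appReport = MemoryManager.GetAppMemoryReport();
    ulong privateCommit = appReport.PrivateCommitUsage;

    // Private and total working set for this process.
    ProcessMemoryReport wsReport = MemoryManager.GetProcessMemoryReport();
    ulong privateWorkingSet = wsReport.PrivateWorkingSetUsage; // What Task Manager shows.
    ulong totalWorkingSet = wsReport.TotalWorkingSetUsage;

    // Peak working set and commit, via ProcessDiagnosticInfo.
    ProcessMemoryUsageReport usageReport =
        ProcessDiagnosticInfo.GetForCurrentProcess().MemoryUsage.GetReport();
    ulong peakWorkingSet = usageReport.PeakWorkingSetSizeInBytes;
    ulong commit = usageReport.PageFileSizeInBytes;

    Debug.WriteLine($"Private commit: {privateCommit}, private WS: {privateWorkingSet}, " +
        $"total WS: {totalWorkingSet}, peak WS: {peakWorkingSet}, commit: {commit}");
}
```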

Each time the app calls its ConsumeMemory method, more commit is allocated, but unless the memory is used, it doesn’t significantly increase the working set. It’s only when the memory is used that the working set increases:
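Continuing the sketch above, a companion UseMemory method that touches the buffers allocated by ConsumeMemory, one page at a time:

```csharp
// A sketch only: touching each 4KB page of the previously allocated buffers
// brings those pages into the working set.
private void UseMemory()
{
    foreach (byte[] buffer in _buffers)
    {
        for (int i = 0; i < buffer.Length; i += 4096)
        {
            buffer[i] = 1;
        }
    }
}
```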

Most apps only need to focus on commit (which is what the Resource Manager bases its decisions on), but some more sophisticated apps might be interested in tracking working-set, also. Some apps, notably games and media-intensive apps, rapidly switch from one set of data to the next (think graphics buffers), and the more their data is in physical memory, the more they can avoid UI stuttering and tearing.

Also, you can think of memory as a closed ecosystem: It can be useful to track your working-set just to see how much pressure you’re putting on the system as a whole. Certain system operations—such as creating processes and threads—require physical memory, and if your app’s working-set usage is excessive this can degrade performance system-wide. This is particularly important on the desktop, where policy doesn’t apply commit limits.

GlobalMemoryStatusEx API

From the Windows 10 Anniversary Update, apps also have available to them the Win32 GlobalMemoryStatusEx API. This provides some additional information beyond the WinRT APIs, and while most apps will never need to use it, it has been provided for the benefit of UWP apps that are highly complex and have very finely tuned memory behaviors. To use this API you also need the MEMORYSTATUSEX struct, as shown in Figure 2.

Figure 2 Importing the GlobalMemoryStatusEx Win32 API
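A sketch of the kind of declarations Figure 2 refers to, using the standard P/Invoke pattern for this API:

```csharp
using System;
using System.Runtime.InteropServices;

// A sketch of the standard P/Invoke declarations for GlobalMemoryStatusEx.
[StructLayout(LayoutKind.Sequential)]
internal class MEMORYSTATUSEX
{
    public uint dwLength;
    public uint dwMemoryLoad;
    public ulong ullTotalPhys;
    public ulong ullAvailPhys;
    public ulong ullTotalPageFile;
    public ulong ullAvailPageFile;
    public ulong ullTotalVirtual;
    public ulong ullAvailVirtual;
    public ulong ullAvailExtendedVirtual;

    public MEMORYSTATUSEX()
    {
        // The API requires dwLength to be set to the size of the structure.
        dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
    }
}

internal static class NativeMethods
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool GlobalMemoryStatusEx([In, Out] MEMORYSTATUSEX lpBuffer);
}
```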

Then, you can instantiate this struct and pass it to GlobalMemoryStatusEx, which will fill in the struct fields on return:
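A sketch of the call, continuing with the MEMORYSTATUSEX and NativeMethods declarations above:

```csharp
// A sketch only: the fields are populated on return if the call succeeds.
var memoryStatus = new MEMORYSTATUSEX();
if (NativeMethods.GlobalMemoryStatusEx(memoryStatus))
{
    ulong totalPhys = memoryStatus.ullTotalPhys;  // Physical memory available to the OS.
    ulong availPhys = memoryStatus.ullAvailPhys;  // Physical memory currently free system-wide.
    uint memoryLoad = memoryStatus.dwMemoryLoad;  // Percentage of physical memory in use.
}
```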

Again, don’t be misled by the names of the fields. For example, if you’re interested in the size of the pagefile, don’t just look at ullTotalPageFile, because this actually represents the current maximum amount of commit, which includes both the pagefile and physical memory. So, what most folks understand as the pagefile size is computed by subtracting the ullTotalPhys value from the ullTotalPageFile value, like so:
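In code, that calculation might look like this, using the memoryStatus instance from the previous sketch:

```csharp
// A sketch only: pagefile size = current commit limit minus physical memory.
ulong pageFileSizeInBytes = memoryStatus.ullTotalPageFile - memoryStatus.ullTotalPhys;
```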

Also note that ullTotalPhys is not the total amount of memory physically installed on the device. Rather, it’s the amount of physical memory the OS has available to it at boot, which is always slightly less than the absolute total of physical memory.

Another interesting value returned is dwMemoryLoad, which represents the percentage of physical memory in use system-wide. In some environments it’s important for an app’s memory usage to be mostly in physical memory, to avoid the disk I/O overhead of using the pagefile. This is especially true for games and media apps—and critically important for Xbox and HoloLens apps.

Remember this is a Win32 API so it will return information that doesn’t account for the UWP sandbox and, in particular, it has no knowledge of resource policy. So, for example, the value returned in ullAvailPhys is the amount of physical memory currently available on the system, but this doesn’t mean that this memory is actually available to the app. On all platforms apart from the desktop, it’s likely to be significantly more than the amount of memory the current UWP app will actually be allowed to use, because its commit usage is constrained by policy, regardless of available physical memory.

For most apps, the MemoryManager API gives you all you need, and all the metrics that you can directly and easily influence in your app. ProcessDiagnosticInfo and GlobalMemoryStatusEx include some additional information that you can’t directly influence, but which a more sophisticated app might want to pivot off for logic decisions, for profiling during development, and for telemetry purposes.

Visual Studio Diagnostics Tools

The memory diagnostic tool in Visual Studio 2015 updates its report in real time during debugging. You can turn this on while in a debug session by selecting the Debug menu, and then Show Diagnostic Tools, as shown in Figure 3.


Figure 3 Analyzing Process Memory in the Visual Studio Diagnostic Tools

The live graph in the Process Memory window tracks private commit, which corresponds to the AppMemoryReport.PrivateCommitUsage and the ProcessMemoryUsageReport.PageFileSizeInBytes. It doesn’t include shared memory, so it represents only part of the metric reported in MemoryManager.AppMemoryUsage, for example. Note that if you hover over any point in the graph, you’ll get a tooltip with usage data for that point in time.

You can also use the tool to take snapshots for more detailed comparisons. This is especially useful if you’re trying to track down a suspected memory leak. In the following example, the app has two methods, one that allocates memory (simulating a leak) and the other that’s naively attempting to release that memory:
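A sketch of those two methods; the 10MB allocation matches the snapshot delta discussed below:

```csharp
using System.Collections.Generic;

// A sketch only: AllocateMemory simulates a leak by holding references;
// ReleaseMemory naively drops them and assumes the memory is returned.
private readonly List<byte[]> _retained = new List<byte[]>();

private void AllocateMemory()
{
    _retained.Add(new byte[10 * 1024 * 1024]); // 10MB, matching the snapshot delta.
}

private void ReleaseMemory()
{
    // Clearing the list removes the references, but the managed heap isn't
    // trimmed until a garbage collection actually runs.
    _retained.Clear();
}
```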

A glance at the memory graph will show that the memory isn’t actually getting released at all. In this example, the simplest fix is to force a garbage collection. Because collection is generational, in scenarios where you have complex object trees (which this example doesn’t), you might need to make two collection passes, and also wait for the collected objects’ finalizers to run to completion. If your app is allocating large objects, you can also set GCLargeObjectHeapCompactionMode to compact the Large Object Heap when a collection is made. The Large Object Heap is used for objects greater than 80KB; it’s rarely collected unless forced; and even when collected, it can leave heap fragmentation. Forcing it to be compacted will increase your app’s chances of allocating large objects later on. Note that the garbage collector generally does a very good job on its own without prompting from the app—you should profile your app carefully before deciding whether you need to force a collection at any time:
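A sketch of the updated ReleaseMemory method, forcing a collection that also compacts the Large Object Heap (continuing with the _retained list from the previous sketch):

```csharp
using System;
using System.Runtime;

// A sketch of the updated release method with a forced, LOH-compacting collection.
private void ReleaseMemory()
{
    _retained.Clear();

    // Compact the Large Object Heap on the next blocking collection.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;

    // Two passes, letting finalizers run in between, for complex object graphs.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}
```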

The example in Figure 3 shows three snapshots, taken before allocating memory, after allocating memory and then after releasing memory (using the updated version of the code). The increase and decrease in memory usage is clear from the graph.

The blue arrows show where the snapshots were taken. The gold arrow shows where a garbage collection was done. The snapshot data is listed in the Memory Usage window below, including details of the heap. The red up arrow in this list indicates where memory usage increased relative to the previous snapshot; conversely the green down arrow shows where it decreased. The delta in this case is 10MB, which matches the allocation done in the code. The Object column lists the total number of live objects on the heap—but the more useful count is the Diff. Both counts are also hyperlinks: for example, if you click the Diff count, it will expand out a detailed breakdown of the increase or decrease in objects allocated by the app at that point in time. From this list, you can select any object to get a more detailed view. For example, select the MainPage in the object window, and this will pull up a breakdown in the Referenced Types window, as shown in Figure 4. The size increase in this example is clearly for the 10MB array.


Figure 4 Examining the Referenced Types in a Memory Snapshot

The Referenced Types view shows you a graph of all types your selected type is referencing. The alternative view is the Paths to Root view, which shows the opposite—the complete graph of types rooting your selected type; that is, all the object references that are keeping the selected object alive.

You can choose to focus either on managed memory allocations, native memory allocations or both. To change this, select the Project menu, then the project Properties. On the Debug tab, select the Debugger type (Managed, Native or Mixed), then turn on Heap Profiling in the Memory Usage tool, as shown in Figure 5. In some cases, as in this example, you’ll see that even though the app is forcing a garbage collection and cleaning up managed allocations, the number and size of native allocations actually increases. This is a good way to track the full memory usage effects of any operation your app performs, rather than focusing solely on the managed side of things.



Figure 5 Tracking Both Managed and Native Heap Usage

This is turned off by default because profiling the native heap while debugging will significantly slow down the app’s performance. As always, profiling should be done on a range of target devices—and preferably using real hardware rather than emulators, as the characteristics are different. While profiling is useful during development, your app can continue to use the MemoryManager and related APIs during production, for making alternate feature decisions in production and for telemetry. On top of that, because they can be used outside the debugging environment—and without the memory overhead of debugging—they more accurately represent the app’s behavior in real use.

Wrapping Up

Users typically install many apps on their devices, and many apps have both foreground and background components. Resource policy strives to ensure that the limited resources on the device are apportioned thoughtfully, in a way that matches the user’s expectations. Priority is given to activities the user is immediately aware of, such as the foreground app, background audio or incoming VoIP calls. However, some resources are also allocated to less important background tasks, to ensure that, for example, the user’s tiles are updated in a timely manner, e-mail is kept synced in the background, and that app data can be kept refreshed ready for the next time the user launches an app.

Resource policy is consistent across all Windows 10 platforms, although it also allows for variability in device capabilities. In this way, a UWP app can be written to target Windows desktop, mobile, Xbox, HoloLens or IoT while being resilient to device variation. The app platform offers a set of APIs the app can use to track its resource usage, to respond to notifications from the system when interesting resource-related events happen and to tune its behavior accordingly. The app developer can also use debugging tools in Visual Studio to profile his app, and eliminate memory leaks.

Andrew Whitechapel is a program manager in the Microsoft Windows division, responsible for the app execution and resource policy for the Universal Windows Application Platform.

Thanks to the following technical experts for reviewing this article: Mark Livschitz and Jeremy Robinson
Mark Livschitz is a Software Engineer working on the Base and Kernel team for the Microsoft Windows division.

Jeremy Robinson is a Software Engineer working on the Developer Platform for the Microsoft Windows division.

A common computer configuration is described as something like 8GB+120GB or 16GB+240GB (sometimes with an additional 3TB drive); the first number is the RAM and the second is the flash storage (or hard disk). The amount of installed memory is an important factor when buying a notebook, but you may still run into the question of why your available memory is so low. This post provides some reasons and solutions.



Part 1: What is low available physical memory, and what causes it?

RAM

RAM, also known as random access memory, is internal storage that exchanges data directly with the processor. It can be read and written at any time and is fast, so it often acts as the temporary data storage for the operating system and other running programs. RAM cannot retain data when the power is turned off; if you need to save data, you must write it to a storage device (such as a hard disk).


ROM

ROM, or read-only memory, can only be read during normal operation; it cannot be rewritten quickly and conveniently the way random access memory can. The data stored in ROM is stable and is not lost after the power is turned off.

The biggest difference between the two is that data stored in RAM disappears when the power is turned off, while data stored in ROM does not, and is retained after power-off.


Part 2: What happens when the computer does not have enough available memory

1. Opening software or the browser is slow

2. Software pages turn grey

3. The mouse icon turns into a spinning circle that doesn't stop

4. The active program hangs and stops responding


Part 3: How to fix the 'computer is low on memory' warning in Windows 10

1. Upgrade to a computer with a higher configuration

2. Install more RAM

3. Do not open too many programs at the same time

4. Clean up useless files on the computer

5. Increase virtual memory

6. Close programs running in the background to free up RAM


How to increase available memory in Windows 10 using virtual memory:

Virtual memory borrows a little space from the hard disk to act as additional memory, so there is no need to upgrade the RAM.

Set virtual memory:


1. Right-click 'This PC' and open the last option, 'Properties'.

2. In the system properties, select 'Advanced System Settings' on the left, and then click the first 'Settings' button, under 'Performance'.


3. In the Performance Options dialog, select the second tab, 'Advanced', then click 'Change' to change the virtual memory.

4. Select the drive 'C: [System]', select 'Custom size', and then set the 'Initial size' and 'Maximum size' yourself. Make them reasonably large.

5. You can also refer to the 'Recommended' value for the total paging file size for all drives shown on this page, as well as the free space available on the selected drive.

6. Click 'Set', then click OK, then click 'Apply' and OK.



Note: There is one point that is less common, but it cannot be ignored.

A 32-bit operating system can only recognize about 3.25GB of memory, so if you have 4GB of RAM or more, you must use a 64-bit operating system. This is why many people who use a 32-bit operating system find that their laptop recognizes far less available physical memory than is installed, sometimes less than half of it.



Since the programs running on a computer need memory to execute, available physical memory drops when the active programs occupy a large proportion of it. The sections above describe some solutions; follow the steps and you should be able to solve the problem you encountered.


