Age | Commit message | Author | Files | Lines
2017-11-15 | ALSA: timer: Limit max instances per timer | Takashi Iwai | 3 | -13/+57
commit 9b7d869ee5a77ed4a462372bb89af622e705bfb8 upstream. Currently we allow unlimited number of timer instances, and it may bring the system hogging way too much CPU when too many timer instances are opened and processed concurrently. This may end up with a soft-lockup report as triggered by syzkaller, especially when hrtimer backend is deployed. Since such insane number of instances aren't demanded by the normal use case of ALSA sequencer and it merely opens a risk only for abuse, this patch introduces the upper limit for the number of instances per timer backend. As default, it's set to 1000, but for the fine-grained timer like hrtimer, it's set to 100. Reported-by: syzbot Tested-by: Jérôme Glisse <jglisse@redhat.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
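A minimal sketch of the per-backend cap described above; the field names num_instances and max_instances are illustrative rather than the verbatim patch, with the changelog's defaults of 1000 (100 for the hrtimer backend):

    /* in the timer-open path; illustrative field names */
    if (timer->num_instances >= timer->max_instances)
        return -EBUSY;          /* refuse yet another concurrent instance */
    timer->num_instances++;     /* dropped again when the instance is closed */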
2017-11-15 | ARM: 8720/1: ensure dump_instr() checks addr_limit | Mark Rutland | 1 | -10/+18
commit b9dd05c7002ee0ca8b676428b2268c26399b5e31 upstream. When CONFIG_DEBUG_USER is enabled, it's possible for a user to deliberately trigger dump_instr() with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. So that we can use the same code to dump user instructions and kernel instructions, the common dumping code is factored out to __dump_instr(), with the fs manipulated appropriately in dump_instr() around calls to this. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
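The factoring described above, sketched (not the verbatim diff): __dump_instr() always uses get_user(), and dump_instr() only widens the address limit for genuine kernel-mode faults:

    static void __dump_instr(const char *lvl, struct pt_regs *regs)
    {
        /* reads the instruction stream with get_user(), never __get_user(),
         * so user-supplied addresses are checked against addr_limit */
    }

    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
        mm_segment_t fs;

        if (!user_mode(regs)) {
            fs = get_fs();
            set_fs(KERNEL_DS);      /* dumping kernel text: widen the limit */
            __dump_instr(lvl, regs);
            set_fs(fs);
        } else {
            __dump_instr(lvl, regs);
        }
    }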
2017-11-15 | ACPI / scan: Enable GPEs before scanning the namespace | Rafael J. Wysocki | 1 | -3/+3
commit eb7f43c4adb4a789f99f53916182c3401b4e33c7 upstream. On some systems the platform firmware expects GPEs to be enabled before the enumeration of devices and if that expectation is not met, the systems in question may not boot in some situations. For this reason, change the initialization ordering of the ACPI subsystem to make it enable GPEs before scanning the namespace for the first time in order to enumerate devices. Reported-by: Mika Westerberg <mika.westerberg@linux.intel.com> Suggested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Lv Zheng <lv.zheng@intel.com> Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | ACPICA: Make it possible to enable runtime GPEs earlier | Rafael J. Wysocki | 3 | -1/+12
commit 1312b7e0caca44e7ff312bc2eaa888943384e3e1 upstream. Runtime GPEs have corresponding _Lxx/_Exx methods and are enabled automatically during the initialization of the ACPI subsystem through acpi_update_all_gpes() with the assumption that acpi_setup_gpe_for_wake() will be called in advance for all of the GPEs pointed to by _PRW objects in the namespace that may be affected by acpi_update_all_gpes(). That is, acpi_ev_initialize_gpe_block() can only be called for a GPE block after acpi_setup_gpe_for_wake() has been called for all of the _PRW (wakeup) GPEs in it. The platform firmware on some systems, however, expects GPEs to be enabled before the enumeration of devices which is when acpi_setup_gpe_for_wake() is called and that goes against the above assumption. For this reason, introduce a new flag to be set by acpi_ev_initialize_gpe_block() when automatically enabling a GPE to indicate to acpi_setup_gpe_for_wake() that it needs to drop the reference to the GPE coming from acpi_ev_initialize_gpe_block() and modify acpi_setup_gpe_for_wake() accordingly. These changes allow acpi_setup_gpe_for_wake() and acpi_ev_initialize_gpe_block() to be invoked in any order. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | ACPICA: Dispatch active GPEs at init time | Rafael J. Wysocki | 1 | -9/+19
commit ecc1165b8b743fd1503b9c799ae3a9933b89877b upstream. In some cases GPEs are already active when they are enabled by acpi_ev_initialize_gpe_block() and whatever happens next may depend on the result of handling the events signaled by them, so the events should not be discarded (which is what happens currently) and they should be handled as soon as reasonably possible. For this reason, modify acpi_ev_initialize_gpe_block() to dispatch GPEs with the status flag set in-band right after enabling them. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | ACPI / PM: Blacklist Low Power S0 Idle _DSM for Dell XPS13 9360 | Rafael J. Wysocki | 1 | -0/+28
commit 71630b7a832f699d6a6764ae75797e4e743ae348 upstream. At least one Dell XPS13 9360 is reported to have serious issues with the Low Power S0 Idle _DSM interface and since this machine model generally can do ACPI S3 just fine, add a blacklist entry to disable that interface for Dell XPS13 9360. Fixes: 8110dd281e15 (ACPI / sleep: EC-based wakeup from suspend-to-idle on recent systems) Link: https://bugzilla.kernel.org/show_bug.cgi?id=196907 Reported-by: Paul Menzel <pmenzel@molgen.mpg.de> Tested-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | KEYS: fix NULL pointer dereference during ASN.1 parsing [ver #2] | Eric Biggers | 1 | -2/+2
commit 624f5ab8720b3371367327a822c267699c1823b8 upstream. syzkaller reported a NULL pointer dereference in asn1_ber_decoder(). It can be reproduced by the following command, assuming CONFIG_PKCS7_TEST_KEY=y: keyctl add pkcs7_test desc '' @s The bug is that if the data buffer is empty, an integer underflow occurs in the following check: if (unlikely(dp >= datalen - 1)) goto data_overrun_error; This results in the NULL data pointer being dereferenced. Fix it by checking for 'datalen - dp < 2' instead. Also fix the similar check for 'dp >= datalen - n' later in the same function. That one possibly could result in a buffer overread. The NULL pointer dereference was reproducible using the "pkcs7_test" key type but not the "asymmetric" key type because the "asymmetric" key type checks for a 0-length payload before calling into the ASN.1 decoder but the "pkcs7_test" key type does not. The bug report was: BUG: unable to handle kernel NULL pointer dereference at (null) IP: asn1_ber_decoder+0x17f/0xe60 lib/asn1_decoder.c:233 PGD 7b708067 P4D 7b708067 PUD 7b6ee067 PMD 0 Oops: 0000 [#1] SMP Modules linked in: CPU: 0 PID: 522 Comm: syz-executor1 Not tainted 4.14.0-rc8 #7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.3-20171021_125229-anatol 04/01/2014 task: ffff9b6b3798c040 task.stack: ffff9b6b37970000 RIP: 0010:asn1_ber_decoder+0x17f/0xe60 lib/asn1_decoder.c:233 RSP: 0018:ffff9b6b37973c78 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000000021c RDX: ffffffff814a04ed RSI: ffffb1524066e000 RDI: ffffffff910759e0 RBP: ffff9b6b37973d60 R08: 0000000000000001 R09: ffff9b6b3caa4180 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 FS: 00007f10ed1f2700(0000) GS:ffff9b6b3ea00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 000000007b6f3000 CR4: 00000000000006f0 Call Trace: pkcs7_parse_message+0xee/0x240 crypto/asymmetric_keys/pkcs7_parser.c:139 verify_pkcs7_signature+0x33/0x180 certs/system_keyring.c:216 pkcs7_preparse+0x41/0x70 crypto/asymmetric_keys/pkcs7_key_type.c:63 key_create_or_update+0x180/0x530 security/keys/key.c:855 SYSC_add_key security/keys/keyctl.c:122 [inline] SyS_add_key+0xbf/0x250 security/keys/keyctl.c:62 entry_SYSCALL_64_fastpath+0x1f/0xbe RIP: 0033:0x4585c9 RSP: 002b:00007f10ed1f1bd8 EFLAGS: 00000216 ORIG_RAX: 00000000000000f8 RAX: ffffffffffffffda RBX: 00007f10ed1f2700 RCX: 00000000004585c9 RDX: 0000000020000000 RSI: 0000000020008ffb RDI: 0000000020008000 RBP: 0000000000000000 R08: ffffffffffffffff R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000216 R12: 00007fff1b2260ae R13: 00007fff1b2260af R14: 00007f10ed1f2700 R15: 0000000000000000 Code: dd ca ff 48 8b 45 88 48 83 e8 01 4c 39 f0 0f 86 a8 07 00 00 e8 53 dd ca ff 49 8d 46 01 48 89 85 58 ff ff ff 48 8b 85 60 ff ff ff <42> 0f b6 0c 30 89 c8 88 8d 75 ff ff ff 83 e0 1f 89 8d 28 ff ff RIP: asn1_ber_decoder+0x17f/0xe60 lib/asn1_decoder.c:233 RSP: ffff9b6b37973c78 CR2: 0000000000000000 Fixes: 42d5ec27f873 ("X.509: Add an ASN.1 decoder") Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
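The underflow is ordinary unsigned arithmetic, so it can be shown outside the kernel; a standalone sketch contrasting the broken and fixed bounds checks:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        size_t datalen = 0;   /* empty payload, as in the syzkaller report */
        size_t dp = 0;        /* current decoder position */

        /* old check: datalen - 1 wraps to SIZE_MAX, so it never fires */
        if (dp >= datalen - 1)
            printf("old check: overrun caught\n");
        else
            printf("old check: overrun missed, decoder dereferences NULL data\n");

        /* fixed check: dp never exceeds datalen, so this cannot wrap */
        if (datalen - dp < 2)
            printf("new check: overrun caught\n");
        return 0;
    }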
2017-11-15 | crypto: x86/sha256-mb - fix panic due to unaligned access | Andrey Ryabinin | 1 | -6/+6
commit 5dfeaac15f2b1abb5a53c9146041c7235eb9aa04 upstream. struct sha256_ctx_mgr allocated in sha256_mb_mod_init() via kzalloc() and later passed in sha256_mb_flusher_mgr_flush_avx2() function where instructions vmovdqa used to access the struct. vmovdqa requires 16-bytes aligned argument, but nothing guarantees that struct sha256_ctx_mgr will have that alignment. Unaligned vmovdqa will generate GP fault. Fix this by replacing vmovdqa with vmovdqu which doesn't have alignment requirements. Fixes: a377c6b1876e ("crypto: sha256-mb - submit/flush routines for AVX2") Reported-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Tim Chen Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | crypto: x86/sha1-mb - fix panic due to unaligned access | Andrey Ryabinin | 1 | -6/+6
commit d041b557792c85677f17e08eee535eafbd6b9aa2 upstream. struct sha1_ctx_mgr allocated in sha1_mb_mod_init() via kzalloc() and later passed in sha1_mb_flusher_mgr_flush_avx2() function where instructions vmovdqa used to access the struct. vmovdqa requires 16-bytes aligned argument, but nothing guarantees that struct sha1_ctx_mgr will have that alignment. Unaligned vmovdqa will generate GP fault. Fix this by replacing vmovdqa with vmovdqu which doesn't have alignment requirements. Fixes: 2249cbb53ead ("crypto: sha-mb - SHA1 multibuffer submit and flush routines for AVX2") Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | crypto: ccm - preserve the IV buffer | Romain Izard | 1 | -1/+3
commit 441f99c90497e15aa3ad1dbabd56187e29614348 upstream. The IV buffer used during CCM operations is used twice, during both the hashing step and the ciphering step. When using a hardware accelerator that updates the contents of the IV buffer at the end of ciphering operations, the value will be modified. In the decryption case, the subsequent setup of the hashing algorithm will interpret the updated IV instead of the original value, which can lead to out-of-bounds writes. Reuse the idata buffer, only used in the hashing step, to preserve the IV's value during the ciphering step in the decryption case. Signed-off-by: Romain Izard <romain.izard.pro@gmail.com> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-15 | workqueue: Fix NULL pointer dereference | Li Bin | 1 | -1/+2
commit cef572ad9bd7f85035ba8272e5352040e8be0152 upstream. When queue_work() is used in irq (not in task context), there is a potential case that trigger NULL pointer dereference. ---------------------------------------------------------------- worker_thread() |-spin_lock_irq() |-process_one_work() |-worker->current_pwq = pwq |-spin_unlock_irq() |-worker->current_func(work) |-spin_lock_irq() |-worker->current_pwq = NULL |-spin_unlock_irq() //interrupt here |-irq_handler |-__queue_work() //assuming that the wq is draining |-is_chained_work(wq) |-current_wq_worker() //Here, 'current' is the interrupted worker! |-current->current_pwq is NULL here! |-schedule() ---------------------------------------------------------------- Avoid it by checking for task context in current_wq_worker(), and if not in task context, we shouldn't use the 'current' to check the condition. Reported-by: Xiaofei Tan <tanxiaofei@huawei.com> Signed-off-by: Li Bin <huawei.libin@huawei.com> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: 8d03ecfe4718 ("workqueue: reimplement is_chained_work() using current_wq_worker()") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
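A sketch of the described guard: current_wq_worker() must not trust 'current' unless it is running in task context (close to, but not guaranteed to be, the exact patch):

    static inline struct worker *current_wq_worker(void)
    {
        /* in hard/soft irq context 'current' is whatever task was
         * interrupted, so its worker state must not be consulted */
        if (in_task() && (current->flags & PF_WQ_WORKER))
            return kthread_data(current);
        return NULL;
    }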
2017-11-15 | netfilter: nft_set_hash: disable fast_ops for 2-len keys | Anatole Denis | 1 | -1/+0
commit 0414c78f14861cb704d6e6888efd53dd36e3bdde upstream. jhash_1word of a u16 is a different value from jhash of the same u16 with length 2. Since elements are always inserted in sets using jhash over the actual klen, this would lead to incorrect lookups on fixed-size sets with a key length of 2, as they would be inserted with hash value jhash(key, 2) and looked up with hash value jhash_1word(key), which is different. Example reproducer(v4.13+), using anonymous sets which always have a fixed size: table inet t { chain c { type filter hook output priority 0; policy accept; tcp dport { 10001, 10003, 10005, 10007, 10009 } counter packets 4 bytes 240 reject tcp dport 10001 counter packets 4 bytes 240 reject tcp dport 10003 counter packets 4 bytes 240 reject tcp dport 10005 counter packets 4 bytes 240 reject tcp dport 10007 counter packets 0 bytes 0 reject tcp dport 10009 counter packets 4 bytes 240 reject } } then use nc -z localhost <port> to probe; incorrectly hashed ports will pass through the set lookup and increment the counter of an individual rule. jhash being seeded with a random value, it is not deterministic which ports will incorrectly hash, but in testing with 5 ports in the set I always had 4 or 5 with an incorrect hash value. Signed-off-by: Anatole Denis <anatole@rezel.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
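The mismatch can be sketched with the two jhash flavours from <linux/jhash.h>; the seed variable name is illustrative:

    u16 port = 10001;           /* a 2-byte set key */
    u32 seed = priv->seed;      /* hypothetical name for the set's hash seed */

    /* insertion hashes the real key length ... */
    u32 h_insert = jhash(&port, 2, seed);

    /* ... but the fast lookup path hashed a full 32-bit word */
    u32 h_lookup = jhash_1word((u32)port, seed);

    /* h_insert != h_lookup in general, so 2-byte keys land in the wrong
     * bucket; hence fast_ops must be restricted to 4-byte keys */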
2017-11-15 | netfilter: nat: Revert "netfilter: nat: convert nat bysrc hash to rhashtable" | Florian Westphal | 3 | -80/+54
commit e1bf1687740ce1a3598a1c5e452b852ff2190682 upstream. This reverts commit 870190a9ec9075205c0fa795a09fa931694a3ff1. It was not a good idea. The custom hash table was a much better fit for this purpose. A fast lookup is not essential, in fact for most cases there is no lookup at all because original tuple is not taken and can be used as-is. What needs to be fast is insertion and deletion. rhlist removal however requires a rhlist walk. We can have thousands of entries in such a list if source port/addresses are reused for multiple flows, if this happens removal requests are so expensive that deletions of a few thousand flows can take several seconds(!). The advantages that we got from rhashtable are: 1) table auto-sizing 2) multiple locks 1) would be nice to have, but it is not essential as we have at most one lookup per new flow, so even a million flows in the bysource table are not a problem compared to current deletion cost. 2) is easy to add to custom hash table. I tried to add hlist_node to rhlist to speed up rhltable_remove but this isn't doable without changing semantics. rhltable_remove_fast will check that the to-be-deleted object is part of the table and that requires a list walk that we want to avoid. Furthermore, using hlist_node increases size of struct rhlist_head, which in turn increases nf_conn size. Link: https://bugzilla.kernel.org/show_bug.cgi?id=196821 Reported-by: Ivan Babrou <ibobrik@gmail.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | Linux 4.13.12 (tag: v4.13.12) | Greg Kroah-Hartman | 1 | -1/+1
2017-11-08 | irqchip/irq-mvebu-gicp: Add missing spin_lock init | Antoine Tenart | 1 | -0/+1
commit c9bb86338a6bb91e4d32db04feb6b8d423e04d06 upstream. A spin lock is used in the irq-mvebu-gicp driver, but it is never initialized. This patch adds the missing spin_lock_init() call in the driver's probe function. Fixes: a68a63cb4dfc ("irqchip/irq-mvebu-gicp: Add new driver for Marvell GICP") Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: gregory.clement@free-electrons.com Acked-by: marc.zyngier@arm.com Cc: thomas.petazzoni@free-electrons.com Cc: andrew@lunn.ch Cc: jason@lakedaemon.net Cc: nadavh@marvell.com Cc: miquel.raynal@free-electrons.com Cc: linux-arm-kernel@lists.infradead.org Cc: sebastian.hesselbarth@gmail.com Link: https://lkml.kernel.org/r/20171025072326.21030-1-antoine.tenart@free-electrons.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
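The shape of the fix, sketched; the structure and field names (mvebu_gicp, spinlock) are treated as illustrative rather than verified against the driver:

    static int mvebu_gicp_probe(struct platform_device *pdev)
    {
        struct mvebu_gicp *gicp;

        gicp = devm_kzalloc(&pdev->dev, sizeof(*gicp), GFP_KERNEL);
        if (!gicp)
            return -ENOMEM;

        spin_lock_init(&gicp->spinlock);   /* the missing initialisation */
        /* ... rest of probe ... */
        return 0;
    }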
2017-11-08 | x86/mcelog: Get rid of RCU remnants | Borislav Petkov | 1 | -94/+27
commit 7298f08ea8870d44d36c7d6cd07dd0303faef6c2 upstream. Jeremy reported a suspicious RCU usage warning in mcelog. /dev/mcelog is called in process context now as part of the notifier chain and doesn't need any of the fancy RCU and lockless accesses which it did in atomic context. Axe it all in favor of a simple mutex synchronization which cures the problem reported. Fixes: 5de97c9f6d85 ("x86/mce: Factor out and deprecate the /dev/mcelog driver") Reported-by: Jeremy Cline <jcline@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-and-tested-by: Tony Luck <tony.luck@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: linux-edac@vger.kernel.org Cc: Laura Abbott <labbott@redhat.com> Link: https://lkml.kernel.org/r/20171101164754.xzzmskl4ngrqc5br@pd.tnic Link: https://bugzilla.redhat.com/show_bug.cgi?id=1498969 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | perf/cgroup: Fix perf cgroup hierarchy support | Tejun Heo | 1 | -2/+4
commit be96b316deff35e119760982c43af74e606fa143 upstream. The following commit: 864c2357ca89 ("perf/core: Do not set cpuctx->cgrp for unscheduled cgroups") made list_update_cgroup_event() skip setting cpuctx->cgrp if no cgroup event targets %current's cgroup. This breaks perf_event's hierarchical support because events which target one of the ancestors get ignored. Fix it by using cgroup_is_descendant() test instead of equality. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: David Carrillo-Cisneros <davidcc@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: kernel-team@fb.com Fixes: 864c2357ca89 ("perf/core: Do not set cpuctx->cgrp for unscheduled cgroups") Link: http://lkml.kernel.org/r/20171028164237.GA972780@devbig577.frc2.facebook.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
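Roughly the change described, sketched for list_update_cgroup_event(); the surrounding code and exact variable handling are assumptions:

    struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx);

    /* was: if (cgrp == event->cgrp) -- misses events that target an
     * ancestor of the running cgroup */
    if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
        cpuctx->cgrp = cgrp;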
2017-11-08 | futex: Fix more put_pi_state() vs. exit_pi_state_list() races | Peter Zijlstra | 1 | -3/+20
commit 153fbd1226fb30b8630802aa5047b8af5ef53c9f upstream. Dmitry (through syzbot) reported being able to trigger the WARN in get_pi_state() and a use-after-free on: raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); Both are due to this race: exit_pi_state_list() put_pi_state() lock(&curr->pi_lock) while() { pi_state = list_first_entry(head); hb = hash_futex(&pi_state->key); unlock(&curr->pi_lock); dec_and_test(&pi_state->refcount); lock(&hb->lock) lock(&pi_state->pi_mutex.wait_lock) // uaf if pi_state free'd lock(&curr->pi_lock); .... unlock(&curr->pi_lock); get_pi_state(); // WARN; refcount==0 The problem is we take the reference count too late, and don't allow it being 0. Fix it by using inc_not_zero() and simply retrying the loop when we fail to get a refcount. In that case put_pi_state() should remove the entry from the list. Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Gratian Crisan <gratian.crisan@ni.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: dvhart@infradead.org Cc: syzbot <bot+2af19c9e1ffe4d4ee1d16c56ae7580feaee75765@syzkaller.appspotmail.com> Cc: syzkaller-bugs@googlegroups.com Fixes: c74aef2d06a9 ("futex: Fix pi_state->owner serialization") Link: http://lkml.kernel.org/r/20171031101853.xpfh72y643kdfhjs@hirez.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
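A sketch of the retry pattern the changelog describes for exit_pi_state_list(); whether the count is an atomic_t or a refcount_t depends on the kernel version, and this is not the verbatim patch:

    while (!list_empty(head)) {
        pi_state = list_first_entry(head, struct futex_pi_state, list);

        /* take the reference while pi_lock still guarantees the entry is
         * valid; if it already dropped to zero, let put_pi_state() unlink
         * it and retry */
        if (!atomic_inc_not_zero(&pi_state->refcount)) {
            raw_spin_unlock_irq(&curr->pi_lock);
            cpu_relax();
            raw_spin_lock_irq(&curr->pi_lock);
            continue;
        }
        /* ... as before, now with a guaranteed non-zero refcount ... */
    }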
2017-11-08 | powerpc/kprobes: Dereference function pointers only if the address does not belong to kernel text | Naveen N. Rao | 1 | -1/+6
commit e6c4dcb308160115287afd87afb63b5684d75a5b upstream. This makes the changes introduced in commit 83e840c770f2c5 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") to be specific to the kprobe subsystem. We previously changed ppc_function_entry() to always check the provided address to confirm if it needed to be dereferenced. This is actually only an issue for kprobe blacklisted asm labels (through use of _ASM_NOKPROBE_SYMBOL) and can cause other issues with ftrace. Also, the additional checks are not really necessary for our other uses. As such, move this check to the kprobes subsystem. Fixes: 83e840c770f2 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | x86: CPU: Fix up "cpu MHz" in /proc/cpuinfo | Rafael J. Wysocki | 3 | -6/+11
commit 941f5f0f6ef5338814145cf2b813cf1f98873e2f upstream. Commit 890da9cf0983 (Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz"") is not sufficient to restore the previous behavior of "cpu MHz" in /proc/cpuinfo on x86 due to some changes made after the commit it has reverted. To address this, make the code in question use arch_freq_get_on_cpu() which also is used by cpufreq for reporting the current frequency of CPUs and since that function doesn't really depend on cpufreq in any way, drop the CONFIG_CPU_FREQ dependency for the object file containing it. Also refactor arch_freq_get_on_cpu() somewhat to avoid IPIs and return cached values right away if it is called very often over a short time (to prevent user space from triggering IPI storms through it). Fixes: 890da9cf0983 (Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz"") Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08Revert "x86: do not use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz""Linus Torvalds1-2/+8
commit 890da9cf098364b11a7f7f5c22fa652531624d03 upstream. This reverts commit 51204e0639c49ada02fd823782ad673b6326d748. There wasn't really any good reason for it, and people are complaining (rightly) that it broke existing practice. Cc: Len Brown <len.brown@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | MIPS: SMP: Fix deadlock & online race | Matt Redfearn | 1 | -6/+16
commit 9e8c399a88f0b87e41a894911475ed2a8f8dff9e upstream. Commit 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") effectively reverted commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online") and thus has reinstated the possibility of deadlock. The commit was based on testing of kernel v4.4, where the CPU hotplug core code issued a BUG() if the starting CPU is not marked online when the boot CPU returns from __cpu_up. The commit fixes this race (in v4.4), but re-introduces the deadlock situation. As noted in the commit message, upstream differs in this area. Commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up") adds a completion event in the CPU hotplug core code, making this race impossible. However, people were unhappy with relying on the core code to do the right thing. To address the issues both commits were trying to fix, add a second completion event in the MIPS smp hotplug path. It removes the possibility of a race, since the MIPS smp hotplug code now synchronises both the boot and secondary CPUs before they return to the hotplug core code. It also addresses the deadlock by ensuring that the secondary CPU is not marked online before its counters are synchronised. This fix should also be backported to fix the race condition introduced by the backport of commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of deadlock when bringing CPUs online"), though really that race only existed before commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up"). Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Fixes: 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask") CC: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com> Patchwork: https://patchwork.linux-mips.org/patch/17376/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | MIPS: microMIPS: Fix incorrect mask in insn_table_MM | Gustavo A. R. Silva | 1 | -1/+1
commit 77238e76b9156d28d86c1e31c00ed2960df0e4de upstream. It seems that this is a typo error and the proper bit masking is "RT | RS" instead of "RS | RS". This issue was detected with the help of Coccinelle. Fixes: d6b3314b49e1 ("MIPS: uasm: Add lh uam instruction") Reported-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/17551/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | MIPS: smp-cmp: Use right include for task_struct | Jason A. Donenfeld | 1 | -1/+1
commit f677b77050c144bd4c515b91ea48bd0efe82355e upstream. When task_struct was moved, this MIPS code was neglected. Evidently nobody is using it anymore. This fixes this build error: In file included from ./arch/mips/include/asm/thread_info.h:15:0, from ./include/linux/thread_info.h:37, from ./include/asm-generic/current.h:4, from ./arch/mips/include/generated/asm/current.h:1, from ./include/linux/sched.h:11, from arch/mips/kernel/smp-cmp.c:22: arch/mips/kernel/smp-cmp.c: In function ‘cmp_boot_secondary’: ./arch/mips/include/asm/processor.h:384:41: error: implicit declaration of function ‘task_stack_page’ [-Werror=implicit-function-declaration] #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \ ^ arch/mips/kernel/smp-cmp.c:84:21: note: in expansion of macro ‘__KSTK_TOS’ unsigned long sp = __KSTK_TOS(idle); ^~~~~~~~~~ Fixes: f3ac60671954 ("sched/headers: Move task-stack related APIs from <linux/sched.h> to <linux/sched/task_stack.h>") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Patchwork: https://patchwork.linux-mips.org/patch/17522/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | MIPS: bpf: Fix a typo in build_one_insn() | Wei Yongjun | 1 | -1/+1
commit 6a2932a463d526e362a6b4e112be226f1d18d088 upstream. Fix a typo in build_one_insn(). Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Patchwork: https://patchwork.linux-mips.org/patch/17491/ Signed-off-by: James Hogan <jhogan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08Revert "powerpc64/elfv1: Only dereference function descriptor for non-text ↵Naveen N. Rao1-9/+1
symbols" commit 63be1a81e40733ecd175713b6a7558dc43f00851 upstream. This reverts commit 83e840c770f2c5 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols"). Chandan reported that on newer kernels, trying to enable function_graph tracer on ppc64 (BE) locks up the system with the following trace: Unable to handle kernel paging request for data at address 0x600000002fa30010 Faulting instruction address: 0xc0000000001f1300 Thread overran stack, or stack corrupted Oops: Kernel access of bad area, sig: 11 [#1] BE SMP NR_CPUS=2048 DEBUG_PAGEALLOC NUMA pSeries Modules linked in: CPU: 1 PID: 6586 Comm: bash Not tainted 4.14.0-rc3-00162-g6e51f1f-dirty #20 task: c000000625c07200 task.stack: c000000625c07310 NIP: c0000000001f1300 LR: c000000000121cac CTR: c000000000061af8 REGS: c000000625c088c0 TRAP: 0380 Not tainted (4.14.0-rc3-00162-g6e51f1f-dirty) MSR: 8000000000001032 <SF,ME,IR,DR,RI> CR: 28002848 XER: 00000000 CFAR: c0000000001f1320 SOFTE: 0 ... NIP [c0000000001f1300] .__is_insn_slot_addr+0x30/0x90 LR [c000000000121cac] .kernel_text_address+0x18c/0x1c0 Call Trace: [c000000625c08b40] [c0000000001bd040] .is_module_text_address+0x20/0x40 (unreliable) [c000000625c08bc0] [c000000000121cac] .kernel_text_address+0x18c/0x1c0 [c000000625c08c50] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c08cf0] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c08d60] [c000000000121b40] .kernel_text_address+0x20/0x1c0 [c000000625c08df0] [c000000000061960] .prepare_ftrace_return+0x50/0x130 ... [c000000625c0ab30] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c0abd0] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c0ac40] [c000000000121b40] .kernel_text_address+0x20/0x1c0 [c000000625c0acd0] [c000000000061960] .prepare_ftrace_return+0x50/0x130 [c000000625c0ad70] [c000000000061b10] .ftrace_graph_caller+0x14/0x34 [c000000625c0ade0] [c000000000121b40] .kernel_text_address+0x20/0x1c0 This is because ftrace is using ppc_function_entry() for obtaining the address of return_to_handler() in prepare_ftrace_return(). The call to kernel_text_address() itself gets traced and we end up in a recursive loop. Fixes: 83e840c770f2 ("powerpc64/elfv1: Only dereference function descriptor for non-text symbols") Reported-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | drm/i915/edp: read edp display control registers unconditionally | Jani Nikula | 1 | -3/+10
commit 7c838e2a9be5ab79b11c7f1520813bfdf0f45462 upstream. Per my reading of the eDP spec, DP_DPCD_DISPLAY_CONTROL_CAPABLE bit in DP_EDP_CONFIGURATION_CAP should be set if the eDP display control registers starting at offset DP_EDP_DPCD_REV are "enabled". Currently we check the bit before reading the registers, and DP_EDP_DPCD_REV is the only way to detect eDP revision. Turns out there are (likely buggy) displays that require eDP 1.4+ features, such as supported link rates and link rate select, but do not have the bit set. Read the display control registers unconditionally. They are supposed to read zero anyway if they are not supported, so there should be no harm in this. This fixes the referenced bug by enabling the eDP version check, and thus reading of the supported link rates. The panel in question has 0 in DP_MAX_LINK_RATE which is only supported in eDP 1.4+. Without the supported link rates method we default to RBR which is insufficient for the panel native mode. As a curiosity, the panel also has a bogus value of 0x12 in DP_EDP_DPCD_REV, but that passes our check for >= DP_EDP_14 (which is 0x03). Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103400 Reported-and-tested-by: Nicolas P. <issun.artiste@gmail.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Manasi Navare <manasi.d.navare@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171026142932.17737-1-jani.nikula@intel.com (cherry picked from commit 0501a3b0eb01ac2209ef6fce76153e5d6b07034e) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | drm/i915: Do not rely on wm preservation for ILK watermarks | Maarten Lankhorst | 2 | -31/+21
commit 8777b927b92cf5b6c29f9f9d3c737addea9ac8a7 upstream. The original intent was to preserve watermarks as much as possible in intel_pipe_wm.raw_wm, and put the validated ones in intel_pipe_wm.wm. It seems this approach is insufficient and we don't always preserve the raw watermarks, so just use the atomic iterator we're already using to get a const pointer to all bound planes on the crtc. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102373 Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171019151341.4579-1-maarten.lankhorst@linux.intel.com (cherry picked from commit 28283f4f359cd7cfa9e65457bb98c507a2cd0cd0) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | mm, swap: fix race between swap count continuation operations | Huang Ying | 2 | -6/+21
commit 2628bd6fc052bd85e9864dae4de494d8a6313391 upstream. One page may store a set of entries of the sis->swap_map (swap_info_struct->swap_map) in multiple swap clusters. If some of the entries has sis->swap_map[offset] > SWAP_MAP_MAX, multiple pages will be used to store the set of entries of the sis->swap_map. And the pages are linked with page->lru. This is called swap count continuation. To access the pages which store the set of entries of the sis->swap_map simultaneously, previously, sis->lock is used. But to improve the scalability of __swap_duplicate(), swap cluster lock may be used in swap_count_continued() now. This may race with add_swap_count_continuation() which operates on a nearby swap cluster, in which the sis->swap_map entries are stored in the same page. The race can cause wrong swap count in practice, thus cause unfreeable swap entries or software lockup, etc. To fix the race, a new spin lock called cont_lock is added to struct swap_info_struct to protect the swap count continuation page list. This is a lock at the swap device level, so the scalability isn't very well. But it is still much better than the original sis->lock, because it is only acquired/released when swap count continuation is used. Which is considered rare in practice. If it turns out that the scalability becomes an issue for some workloads, we can split the lock into some more fine grained locks. Link: http://lkml.kernel.org/r/20171017081320.28133-1-ying.huang@intel.com Fixes: 235b62176712 ("mm/swap: add cluster lock") Signed-off-by: "Huang, Ying" <ying.huang@intel.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Shaohua Li <shli@kernel.org> Cc: Tim Chen <tim.c.chen@intel.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Aaron Lu <aaron.lu@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
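A minimal sketch of the device-level lock named in the changelog; only the shape, not the full patch:

    struct swap_info_struct {
        /* ... existing fields ... */
        spinlock_t cont_lock;   /* protects the swap count continuation pages */
    };

    /* add_swap_count_continuation() and swap_count_continued() then
     * bracket their walk of the continuation page list with: */
    spin_lock(&si->cont_lock);
    /* ... walk/modify the page->lru continuation chain ... */
    spin_unlock(&si->cont_lock);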
2017-11-08 | fs/hugetlbfs/inode.c: fix hwpoison reserve accounting | Mike Kravetz | 1 | -1/+4
commit ab615a5b879292e83653be60aa82113f7c6f462d upstream. Calling madvise(MADV_HWPOISON) on a hugetlbfs page will result in bad (negative) reserved huge page counts. This may not happen immediately, but may happen later when the underlying file is removed or filesystem unmounted. For example: AnonHugePages: 0 kB ShmemHugePages: 0 kB HugePages_Total: 1 HugePages_Free: 0 HugePages_Rsvd: 18446744073709551615 HugePages_Surp: 0 Hugepagesize: 2048 kB In routine hugetlbfs_error_remove_page(), hugetlb_fix_reserve_counts is called after remove_huge_page. hugetlb_fix_reserve_counts is designed to only be called/used only if a failure is returned from hugetlb_unreserve_pages. Therefore, call hugetlb_unreserve_pages as required and only call hugetlb_fix_reserve_counts in the unlikely event that hugetlb_unreserve_pages returns an error. Link: http://lkml.kernel.org/r/20171019230007.17043-2-mike.kravetz@oracle.com Fixes: 78bb920344b8 ("mm: hwpoison: dissolve in-use hugepage in unrecoverable memory error") Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
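The described fix, sketched for hugetlbfs_error_remove_page() (close to, but not guaranteed to be, the verbatim patch):

    static int hugetlbfs_error_remove_page(struct address_space *mapping,
                                           struct page *page)
    {
        struct inode *inode = mapping->host;
        pgoff_t index = page->index;

        remove_huge_page(page);
        /* undo the reservation the normal way; only fall back to the
         * fix-up helper if that fails */
        if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
            hugetlb_fix_reserve_counts(inode);

        return 0;
    }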
2017-11-08 | ocfs2: fstrim: Fix start offset of first cluster group during fstrim | Ashish Samant | 1 | -6/+18
commit 105ddc93f06ebe3e553f58563d11ed63dbcd59f0 upstream. The first cluster group descriptor is not stored at the start of the group but at an offset from the start. We need to take this into account while doing fstrim on the first cluster group. Otherwise we will wrongly start fstrim a few blocks after the desired start block and the range can cross over into the next cluster group and zero out the group descriptor there. This can cause filesystem corruption that cannot be fixed by fsck. Link: http://lkml.kernel.org/r/1507835579-7308-1-git-send-email-ashish.samant@oracle.com Signed-off-by: Ashish Samant <ashish.samant@oracle.com> Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com> Reviewed-by: Joseph Qi <jiangqi903@gmail.com> Cc: Mark Fasheh <mfasheh@versity.com> Cc: Joel Becker <jlbec@evilplan.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | userfaultfd: hugetlbfs: prevent UFFDIO_COPY to fill beyond the end of i_size | Andrea Arcangeli | 1 | -2/+30
commit 1e3921471354244f70fe268586ff94a97a6dd4df upstream. This oops: kernel BUG at fs/hugetlbfs/inode.c:484! RIP: remove_inode_hugepages+0x3d0/0x410 Call Trace: hugetlbfs_setattr+0xd9/0x130 notify_change+0x292/0x410 do_truncate+0x65/0xa0 do_sys_ftruncate.constprop.3+0x11a/0x180 SyS_ftruncate+0xe/0x10 tracesys+0xd9/0xde was caused by the lack of i_size check in hugetlb_mcopy_atomic_pte. mmap() can still succeed beyond the end of the i_size after vmtruncate zapped vmas in those ranges, but the faults must not succeed, and that includes UFFDIO_COPY. We could differentiate the retval to userland to represent a SIGBUS like a page fault would do (vs SIGSEGV), but it doesn't seem very useful and we'd need to pick a random retval as there's no meaningful syscall retval that would differentiate from SIGSEGV and SIGBUS, there's just -EFAULT. Link: http://lkml.kernel.org/r/20171016223914.2421-2-aarcange@redhat.com Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
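A sketch of the missing bound in hugetlb_mcopy_atomic_pte(): reject a UFFDIO_COPY whose target index lies at or beyond i_size, as a page fault would; the label and exact placement are illustrative:

    pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
    struct address_space *mapping = dst_vma->vm_file->f_mapping;
    unsigned long size = i_size_read(mapping->host) >> huge_page_shift(h);

    ret = -EFAULT;
    if (idx >= size)
        goto out_release;     /* target is beyond EOF after a truncate */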
2017-11-08 | drm/amdgpu: allow harvesting check for Polaris VCE | Leo Liu | 1 | -6/+6
commit 32bec2afa525149288e6696079bc85f747fa2138 upstream. Fixes init failures on Polaris cards with harvested VCE blocks. Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | drm/amdgpu: return -ENOENT from uvd 6.0 early init for harvesting | Leo Liu | 1 | -0/+4
commit cb4b02d7cac56a69d8137d8d843507cca9182aed upstream. Fixes init failures on polaris cards with harvested UVD. Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | ARM: 8715/1: add a private asm/unaligned.h | Arnd Bergmann | 2 | -1/+27
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream. The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags. On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction including stm/ldm and ldrd/strd that will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code that does not have a trap handler. This adds an ARM specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, with the exception of using regular load/store instructions instead of the trapping instructions multi-register variants. The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues. There are two follow-ups that I think we should also work on, but not backport to stable kernels, first to change the asm-generic version of the header to remove the ARM special case, and second to review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | ARM: dts: mvebu: pl310-cache disable double-linefill | Yan Markman | 3 | -6/+6
commit cda80a82ac3e89309706c027ada6ab232be1d640 upstream. Under heavy system stress mvebu SoC using Cortex A9 sporadically encountered instability issues. The "double linefill" feature of L2 cache was identified as causing dependency between read and write which lead to the deadlock. Especially, it was the cause of deadlock seen under heavy PCIe traffic, as this dependency violates PCIE overtaking rule. Fixes: c8f5a878e554 ("ARM: mvebu: use DT properties to fine-tune the L2 configuration") Signed-off-by: Yan Markman <ymarkman@marvell.com> Signed-off-by: Igal Liberman <igall@marvell.com> Signed-off-by: Nadav Haklai <nadavh@marvell.com> [gregory.clement@free-electrons.com: reformulate commit log, add Armada 375 and add Fixes tag] Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | arm/arm64: kvm: Disable branch profiling in HYP code | Julien Thierry | 2 | -2/+2
commit f9b269f3098121b5d54aaf822e0898c8ed1d3fec upstream. When HYP code runs into branch profiling code, it attempts to jump to unmapped memory, causing a HYP Panic. Disable the branch profiling for code designed to run at HYP mode. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | arm/arm64: KVM: set right LR register value for 32 bit guest when inject abort | Dongjiu Geng | 2 | -5/+17
commit fd6c8c206fc5d0717b0433b191de0715122f33bb upstream. When an exception is trapped to EL2, hardware uses ELR_ELx to hold the current fault instruction address. If KVM wants to inject an abort to a 32 bit guest, it needs to set the LR register for the guest to emulate this abort happening in the guest. Because the ARM32 architecture uses pipelined execution, the LR value has an offset to the fault instruction address. The offsets applied to the Link value for exceptions are shown below, and should be added for the ARM32 link register (LR). Table taken from ARMv8 ARM DDI0487B-B, table G1-10:

    Exception               Offset, for PE state of:
                            A32    T32
    Undefined Instruction   +4     +2
    Prefetch Abort          +4     +4
    Data Abort              +8     +8
    IRQ or FIQ              +4     +4

[ Removed unused variables in inject_abt to avoid compile warnings. -- Christoffer ] Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Tested-by: Haibin Zhang <zhanghaibin7@huawei.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | KVM: arm64: its: Fix missing dynamic allocation check in scan_its_table | Christoffer Dall | 1 | -11/+7
commit 8c1a8a32438b95792bbd8719d1cd4fe36e9eba03 upstream. We currently allocate an entry dynamically, but we never check if the allocation actually succeeded. We actually don't need a dynamic allocation, because we know the maximum size of an ITS table entry, so we can simply use an allocation on the stack. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | arm64: ensure __dump_instr() checks addr_limit | Mark Rutland | 1 | -1/+1
commit 7a7003b1da010d2b0d1dc8bf21c10f5c73b389f1 upstream. It's possible for a user to deliberately trigger __dump_instr with a chosen kernel address. Let's avoid problems resulting from this by using get_user() rather than __get_user(), ensuring that we don't erroneously access kernel memory. Where we use __dump_instr() on kernel text, we already switch to KERNEL_DS, so this shouldn't adversely affect those cases. Fixes: 60ffc30d5652810d ("arm64: Exception handling") Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | virtio_blk: Fix an SG_IO regression | Bart Van Assche | 1 | -0/+12
commit efea2abcb03215f2efadfe994ff7f652aaff196b upstream. Avoid that submitting an SG_IO ioctl triggers a kernel oops that is preceded by: usercopy: kernel memory overwrite attempt detected to (null) (<null>) (6 bytes) kernel BUG at mm/usercopy.c:72! Reported-by: Dann Frazier <dann.frazier@canonical.com> Fixes: commit ca18d6f769d2 ("block: Make most scsi_req_init() calls implicit") Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Dann Frazier <dann.frazier@canonical.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Moved virtblk_initialize_rq() inside CONFIG_VIRTIO_BLK_SCSI. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-08 | ASoC: adau17x1: Workaround for noise bug in ADC | Ricard Wanderlof | 2 | -1/+25
commit 1e6f4fc06f6411adf98bbbe7fcd79442cd2b2a75 upstream. The ADC in the ADAU1361 (and possibly other Analog Devices codecs) exhibits a cyclic variation in the noise floor (in our test setup between -87 and -93 dB), a new value being attained within this range whenever a new capture stream is started. The cycle repeats after about 10 or 11 restarts. The workaround recommended by the manufacturer is to toggle the ADOSR bit in the Converter Control 0 register each time a new capture stream is started. I have verified that the patch fixes this problem on the ADAU1361, and according to the manufacturer toggling the bit in question in this manner will at least have no detrimental effect on other chips served by this driver. Signed-off-by: Ricard Wanderlof <ricardw@axis.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | KEYS: fix out-of-bounds read during ASN.1 parsing | Eric Biggers | 1 | -0/+3
commit 2eb9eabf1e868fda15808954fb29b0f105ed65f1 upstream. syzkaller with KASAN reported an out-of-bounds read in asn1_ber_decoder(). It can be reproduced by the following command, assuming CONFIG_X509_CERTIFICATE_PARSER=y and CONFIG_KASAN=y: keyctl add asymmetric desc $'\x30\x30' @s The bug is that the length of an ASN.1 data value isn't validated in the case where it is encoded using the short form, causing the decoder to read past the end of the input buffer. Fix it by validating the length. The bug report was: BUG: KASAN: slab-out-of-bounds in asn1_ber_decoder+0x10cb/0x1730 lib/asn1_decoder.c:233 Read of size 1 at addr ffff88003cccfa02 by task syz-executor0/6818 CPU: 1 PID: 6818 Comm: syz-executor0 Not tainted 4.14.0-rc7-00008-g5f479447d983 #2 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:16 [inline] dump_stack+0xb3/0x10b lib/dump_stack.c:52 print_address_description+0x79/0x2a0 mm/kasan/report.c:252 kasan_report_error mm/kasan/report.c:351 [inline] kasan_report+0x236/0x340 mm/kasan/report.c:409 __asan_report_load1_noabort+0x14/0x20 mm/kasan/report.c:427 asn1_ber_decoder+0x10cb/0x1730 lib/asn1_decoder.c:233 x509_cert_parse+0x1db/0x650 crypto/asymmetric_keys/x509_cert_parser.c:89 x509_key_preparse+0x64/0x7a0 crypto/asymmetric_keys/x509_public_key.c:174 asymmetric_key_preparse+0xcb/0x1a0 crypto/asymmetric_keys/asymmetric_type.c:388 key_create_or_update+0x347/0xb20 security/keys/key.c:855 SYSC_add_key security/keys/keyctl.c:122 [inline] SyS_add_key+0x1cd/0x340 security/keys/keyctl.c:62 entry_SYSCALL_64_fastpath+0x1f/0xbe RIP: 0033:0x447c89 RSP: 002b:00007fca7a5d3bd8 EFLAGS: 00000246 ORIG_RAX: 00000000000000f8 RAX: ffffffffffffffda RBX: 00007fca7a5d46cc RCX: 0000000000447c89 RDX: 0000000020006f4a RSI: 0000000020006000 RDI: 0000000020001ff5 RBP: 0000000000000046 R08: fffffffffffffffd R09: 0000000000000000 R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 00007fca7a5d49c0 R15: 00007fca7a5d4700 Fixes: 42d5ec27f873 ("X.509: Add an ASN.1 decoder") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
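A sketch of the added validation: after reading a short-form length octet, refuse lengths that run past the remaining input (close to the described fix, not necessarily verbatim):

    len = data[dp++];                   /* short form: the octet is the length */
    if (unlikely(len > datalen - dp))   /* dp <= datalen, so no wrap here */
        goto data_overrun_error;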
2017-11-08 | KEYS: trusted: fix writing past end of buffer in trusted_read() | Eric Biggers | 1 | -11/+12
commit a3c812f7cfd80cf51e8f5b7034f7418f6beb56c1 upstream. When calling keyctl_read() on a key of type "trusted", if the user-supplied buffer was too small, the kernel ignored the buffer length and just wrote past the end of the buffer, potentially corrupting userspace memory. Fix it by instead returning the size required, as per the documentation for keyctl_read(). We also don't even fill the buffer at all in this case, as this is slightly easier to implement than doing a short read, and either behavior appears to be permitted. It also makes it match the behavior of the "encrypted" key type. Fixes: d00a1c72f7f4 ("keys: add new trusted key-type") Reported-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Reviewed-by: James Morris <james.l.morris@oracle.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
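The keyctl_read() contract being restored can be modelled standalone: always report the size required, and only copy when the caller's buffer is large enough:

    #include <string.h>
    #include <sys/types.h>

    /* model of the fixed ->read() behaviour: never write past buflen */
    static ssize_t blob_read(char *buf, size_t buflen,
                             const char *blob, size_t bloblen)
    {
        if (buf && buflen >= bloblen)
            memcpy(buf, blob, bloblen);
        return bloblen;          /* size required, even for a short buffer */
    }

    int main(void)
    {
        char small[4];
        const char sealed[] = "0123456789";  /* stand-in for the key blob */

        /* buffer too small: nothing is copied, caller learns the needed size */
        return blob_read(small, sizeof(small), sealed, sizeof(sealed) - 1) == 10 ? 0 : 1;
    }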
2017-11-08 | KEYS: return full count in keyring_read() if buffer is too small | Eric Biggers | 1 | -20/+19
commit 3239b6f29bdfb4b0a2ba59df995fc9e6f4df7f1f upstream. Commit e645016abc80 ("KEYS: fix writing past end of user-supplied buffer in keyring_read()") made keyring_read() stop corrupting userspace memory when the user-supplied buffer is too small. However it also made the return value in that case be the short buffer size rather than the size required, yet keyctl_read() is actually documented to return the size required. Therefore, switch it over to the documented behavior. Note that for now we continue to have it fill the short buffer, since it did that before (pre-v3.13) and dump_key_tree_aux() in keyutils arguably relies on it. Fixes: e645016abc80 ("KEYS: fix writing past end of user-supplied buffer in keyring_read()") Reported-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: James Morris <james.l.morris@oracle.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | cifs: check MaxPathNameComponentLength != 0 before using it | Ronnie Sahlberg | 1 | -2/+3
commit f74bc7c6679200a4a83156bb89cbf6c229fe8ec0 upstream. And fix tcon leak in error path. Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com> Reviewed-by: David Disseldorp <ddiss@samba.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-08 | ALSA: seq: Fix nested rwsem annotation for lockdep splat | Takashi Iwai | 1 | -1/+1
commit 1f20f9ff57ca23b9f5502fca85ce3977e8496cb1 upstream. syzkaller reported the lockdep splat due to the possible deadlock of grp->list_mutex of each sequencer client object. Actually this is rather a false-positive report due to the missing nested lock annotations. The sequencer client may deliver the event directly to another client which takes another own lock. For addressing this issue, this patch replaces the simple down_read() with down_read_nested(). As a lock subclass, the already existing "hop" can be re-used, which indicates the depth of the call. Reference: http://lkml.kernel.org/r/089e082686ac9b482e055c832617@google.com Reported-by: syzbot <bot+7feb8de6b4d6bf810cf098bef942cc387e79d0ad@syzkaller.appspotmail.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
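The one-line shape of the fix as described (sketch; 'hop' is the existing delivery-depth variable reused as the lockdep subclass):

    /* was: down_read(&grp->list_mutex); */
    down_read_nested(&grp->list_mutex, hop);
    /* ... deliver the event ... */
    up_read(&grp->list_mutex);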
2017-11-08 | ALSA: timer: Add missing mutex lock for compat ioctls | Takashi Iwai | 1 | -2/+15
commit 79fb0518fec8c8b4ea7f1729f54f293724b3dbb0 upstream. The races among ioctl and other operations were protected by the commit af368027a49a ("ALSA: timer: Fix race among timer ioctls") and later fixes, but one code path was forgotten in the scenario: the 32bit compat ioctl. As syzkaller recently spotted, a very similar use-after-free may happen with the combination of compat ioctls. The fix is simply to apply the same ioctl_lock to the compat_ioctl callback, too. Fixes: af368027a49a ("ALSA: timer: Fix race among timer ioctls") Reference: http://lkml.kernel.org/r/089e082686ac9b482e055c832617@google.com Reported-by: syzbot <bot+e5f3c9783e7048a74233054febbe9f1bdf54b6da@syzkaller.appspotmail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
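A sketch of applying the same ioctl_lock to the compat path; the inner helper name __snd_timer_user_ioctl_compat is illustrative:

    static long snd_timer_user_ioctl_compat(struct file *file, unsigned int cmd,
                                            unsigned long arg)
    {
        struct snd_timer_user *tu = file->private_data;
        long ret;

        mutex_lock(&tu->ioctl_lock);
        ret = __snd_timer_user_ioctl_compat(file, cmd, arg);
        mutex_unlock(&tu->ioctl_lock);
        return ret;
    }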
2017-11-02 | Linux 4.13.11 (tag: v4.13.11) | Greg Kroah-Hartman | 1 | -1/+1
2017-11-02 | powerpc/xive: Fix the size of the cpumask used in xive_find_target_in_mask() | Cédric Le Goater | 1 | -1/+1
commit a9dadc1c512807f955f0799e85830b420da47932 upstream. When called from xive_irq_startup(), the size of the cpumask can be larger than nr_cpu_ids. This can result in a WARN_ON such as: WARNING: CPU: 10 PID: 1 at ../arch/powerpc/sysdev/xive/common.c:476 xive_find_target_in_mask+0x110/0x2f0 ... NIP [c00000000008a310] xive_find_target_in_mask+0x110/0x2f0 LR [c00000000008a2e4] xive_find_target_in_mask+0xe4/0x2f0 Call Trace: xive_find_target_in_mask+0x74/0x2f0 (unreliable) xive_pick_irq_target.isra.1+0x200/0x230 xive_irq_startup+0x60/0x180 irq_startup+0x70/0xd0 __setup_irq+0x7bc/0x880 request_threaded_irq+0x14c/0x2c0 request_event_sources_irqs+0x100/0x180 __machine_initcall_pseries_init_ras_IRQ+0x104/0x134 do_one_initcall+0x68/0x1d0 kernel_init_freeable+0x290/0x374 kernel_init+0x24/0x170 ret_from_kernel_thread+0x5c/0x74 This happens because we're being called with our affinity mask set to irq_default_affinity. That in turn was populated using cpumask_setall(), which sets NR_CPUs worth of bits, not nr_cpu_ids worth. Finally cpumask_weight() will return > nr_cpu_ids when passed a mask which has > nr_cpu_ids bits set. Fix it by limiting the value returned by cpumask_weight(). Signed-off-by: Cédric Le Goater <clg@kaod.org> [mpe: Add change log details on actual cause] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Cc: Stewart Smith <stewart@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
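The clamp described above amounts to a one-liner in xive_find_target_in_mask(); a sketch:

    /* a mask filled with cpumask_setall() can have NR_CPUS bits set,
     * so never trust the weight beyond nr_cpu_ids */
    int num = min_t(int, cpumask_weight(mask), nr_cpu_ids);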