diff --git a/Documentation/ABI/testing/sysfs-module b/Documentation/ABI/testing/sysfs-module index 08886367d0470..62addab47d0c5 100644 --- a/Documentation/ABI/testing/sysfs-module +++ b/Documentation/ABI/testing/sysfs-module @@ -60,3 +60,14 @@ Description: Module taint flags: C staging driver module E unsigned module == ===================== + +What: /sys/module/grant_table/parameters/free_per_iteration +Date: July 2023 +KernelVersion: 6.5 but backported to all supported stable branches +Contact: Xen developer discussion +Description: Read and write number of grant entries to attempt to free per iteration. + + Note: Future versions of Xen and Linux may provide a better + interface for controlling the rate of deferred grant reclaim + or may not need it at all. +Users: Qubes OS (https://www.qubes-os.org) diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst index 4d186f599d90f..32a8893e56177 100644 --- a/Documentation/admin-guide/hw-vuln/spectre.rst +++ b/Documentation/admin-guide/hw-vuln/spectre.rst @@ -484,11 +484,14 @@ Spectre variant 2 Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at boot, by setting the IBRS bit, and they're automatically protected against - Spectre v2 variant attacks, including cross-thread branch target injections - on SMT systems (STIBP). In other words, eIBRS enables STIBP too. + Spectre v2 variant attacks. - Legacy IBRS systems clear the IBRS bit on exit to userspace and - therefore explicitly enable STIBP for that + On Intel's enhanced IBRS systems, this includes cross-thread branch target + injections on SMT systems (STIBP). In other words, Intel eIBRS enables + STIBP, too. + + AMD Automatic IBRS does not protect userspace, and Legacy IBRS systems clear + the IBRS bit on exit to userspace, therefore both explicitly enable STIBP. The retpoline mitigation is turned on by default on vulnerable CPUs. It can be forced on or off by the administrator diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst index f18f46be5c0c7..2cd8fa332feb7 100644 --- a/Documentation/filesystems/tmpfs.rst +++ b/Documentation/filesystems/tmpfs.rst @@ -84,8 +84,6 @@ nr_inodes The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower. -noswap Disables swap. Remounts must respect the original settings. - By default swap is enabled. ========= ============================================================ These parameters accept a suffix k, m or g for kilo, mega and giga and @@ -99,36 +97,31 @@ mount with such options, since it allows any user with write access to use up all the memory on the machine; but enhances the scalability of that instance in a system with many CPUs making intensive use of it. +tmpfs blocks may be swapped out, when there is a shortage of memory. +tmpfs has a mount option to disable its use of swap: + +====== =========================================================== +noswap Disables swap. Remounts must respect the original settings. + By default swap is enabled. +====== =========================================================== + tmpfs also supports Transparent Huge Pages which requires a kernel configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for your system (has_transparent_hugepage(), which is architecture specific). 
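For example, such a mount can be requested from a program via mount(2), passing the huge= option (detailed in the table below) in the data string. A minimal sketch, assuming a pre-existing /mnt/tmp target and CAP_SYS_ADMIN::

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
            /* Request a tmpfs instance that only allocates huge pages
             * when they fit fully within i_size; the /mnt/tmp target
             * is a made-up example and must already exist. */
            if (mount("tmpfs", "/mnt/tmp", "tmpfs", 0,
                      "size=1g,huge=within_size") != 0) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }

The equivalent from a shell is "mount -t tmpfs -o size=1g,huge=within_size tmpfs /mnt/tmp".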
The mount options for this are: -====== ============================================================ -huge=0 never: disables huge pages for the mount -huge=1 always: enables huge pages for the mount -huge=2 within_size: only allocate huge pages if the page will be - fully within i_size, also respect fadvise()/madvise() hints. -huge=3 advise: only allocate huge pages if requested with - fadvise()/madvise() -====== ============================================================ - -There is a sysfs file which you can also use to control system wide THP -configuration for all tmpfs mounts, the file is: - -/sys/kernel/mm/transparent_hugepage/shmem_enabled - -This sysfs file is placed on top of THP sysfs directory and so is registered -by THP code. It is however only used to control all tmpfs mounts with one -single knob. Since it controls all tmpfs mounts it should only be used either -for emergency or testing purposes. The values you can set for shmem_enabled are: - -== ============================================================ --1 deny: disables huge on shm_mnt and all mounts, for - emergency use --2 force: enables huge on shm_mnt and all mounts, w/o needing - option, for testing -== ============================================================ +================ ============================================================== +huge=never Do not allocate huge pages. This is the default. +huge=always Attempt to allocate huge page every time a new page is needed. +huge=within_size Only allocate huge page if it will be fully within i_size. + Also respect madvise(2) hints. +huge=advise Only allocate huge page if requested with madvise(2). +================ ============================================================== + +See also Documentation/admin-guide/mm/transhuge.rst, which describes the +sysfs file /sys/kernel/mm/transparent_hugepage/shmem_enabled: which can +be used to deny huge pages on all tmpfs mounts in an emergency, or to +force huge pages on all tmpfs mounts for testing. tmpfs has a mount option to set the NUMA memory allocation policy for all files in that instance (if CONFIG_NUMA is enabled) - which can be diff --git a/Documentation/process/security-bugs.rst b/Documentation/process/security-bugs.rst index 82e29837d5898..5a6993795bd26 100644 --- a/Documentation/process/security-bugs.rst +++ b/Documentation/process/security-bugs.rst @@ -63,31 +63,28 @@ information submitted to the security list and any followup discussions of the report are treated confidentially even after the embargo has been lifted, in perpetuity. -Coordination ------------- - -Fixes for sensitive bugs, such as those that might lead to privilege -escalations, may need to be coordinated with the private - mailing list so that distribution vendors -are well prepared to issue a fixed kernel upon public disclosure of the -upstream fix. Distros will need some time to test the proposed patch and -will generally request at least a few days of embargo, and vendor update -publication prefers to happen Tuesday through Thursday. When appropriate, -the security team can assist with this coordination, or the reporter can -include linux-distros from the start. In this case, remember to prefix -the email Subject line with "[vs]" as described in the linux-distros wiki: - +Coordination with other groups +------------------------------ + +The kernel security team strongly recommends that reporters of potential +security issues NEVER contact the "linux-distros" mailing list until +AFTER discussing it with the kernel security team. 
Do not Cc: both +lists at once. You may contact the linux-distros mailing list after a +fix has been agreed on and you fully understand the requirements that +doing so will impose on you and the kernel community. + +The different lists have different goals and the linux-distros rules do +not contribute to actually fixing any potential security problems. CVE assignment -------------- -The security team does not normally assign CVEs, nor do we require them -for reports or fixes, as this can needlessly complicate the process and -may delay the bug handling. If a reporter wishes to have a CVE identifier -assigned ahead of public disclosure, they will need to contact the private -linux-distros list, described above. When such a CVE identifier is known -before a patch is provided, it is desirable to mention it in the commit -message if the reporter agrees. +The security team does not assign CVEs, nor do we require them for +reports or fixes, as this can needlessly complicate the process and may +delay the bug handling. If a reporter wishes to have a CVE identifier +assigned, they should find one by themselves, for example by contacting +MITRE directly. However under no circumstances will a patch inclusion +be delayed to wait for a CVE identifier to arrive. Non-disclosure agreements ------------------------- diff --git a/Makefile b/Makefile index b3dc3b5f14cae..9607ce0b8a10c 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 6 PATCHLEVEL = 4 -SUBLEVEL = 7 +SUBLEVEL = 8 EXTRAVERSION = NAME = Hurr durr I'ma ninja sloth diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h index 4eb601e7de507..06382da630123 100644 --- a/arch/arm64/include/asm/virt.h +++ b/arch/arm64/include/asm/virt.h @@ -78,6 +78,7 @@ extern u32 __boot_cpu_mode[2]; void __hyp_set_vectors(phys_addr_t phys_vector_base); void __hyp_reset_vectors(void); +bool is_kvm_arm_initialised(void); DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized); diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 9d7d10d60bfdc..520b681a07bb0 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -917,6 +917,8 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, if (task == current) put_cpu_fpsimd_context(); + task_set_vl(task, type, vl); + /* * Free the changed states if they are not in use, SME will be * reallocated to the correct size on next use and we just @@ -931,8 +933,6 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type, if (free_sme) sme_free(task); - task_set_vl(task, type, vl); - out: update_tsk_thread_flag(task, vec_vl_inherit_flag(type), flags & PR_SVE_VL_INHERIT); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 7d8c3dd8b7ca9..3a2606ba3e583 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -51,11 +51,16 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector); DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); -static bool vgic_present; +static bool vgic_present, kvm_arm_initialised; static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled); DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use); +bool is_kvm_arm_initialised(void) +{ + return kvm_arm_initialised; +} + int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) { return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE; @@ -2396,6 +2401,8 @@ static __init int kvm_arm_init(void) if (err) goto out_subs; + kvm_arm_initialised 
= true; + return 0; out_subs: diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 6e9ece1ebbe72..3895416cb15ae 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -243,7 +243,7 @@ static int __init finalize_pkvm(void) { int ret; - if (!is_protected_kvm_enabled()) + if (!is_protected_kvm_enabled() || !is_kvm_arm_initialised()) return 0; /* diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig index 73519e13bbb39..2570e7c1eb75f 100644 --- a/arch/loongarch/Kconfig +++ b/arch/loongarch/Kconfig @@ -12,6 +12,7 @@ config LOONGARCH select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI select ARCH_HAS_FORTIFY_SOURCE select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS + select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_INLINE_READ_LOCK if !PREEMPTION diff --git a/arch/loongarch/lib/clear_user.S b/arch/loongarch/lib/clear_user.S index fd1d62b244f2f..9dcf717193874 100644 --- a/arch/loongarch/lib/clear_user.S +++ b/arch/loongarch/lib/clear_user.S @@ -108,6 +108,7 @@ SYM_FUNC_START(__clear_user_fast) addi.d a3, a2, -8 bgeu a0, a3, .Llt8 15: st.d zero, a0, 0 + addi.d a0, a0, 8 .Llt8: 16: st.d zero, a2, -8 @@ -188,7 +189,7 @@ SYM_FUNC_START(__clear_user_fast) _asm_extable 13b, .L_fixup_handle_0 _asm_extable 14b, .L_fixup_handle_1 _asm_extable 15b, .L_fixup_handle_0 - _asm_extable 16b, .L_fixup_handle_1 + _asm_extable 16b, .L_fixup_handle_0 _asm_extable 17b, .L_fixup_handle_s0 _asm_extable 18b, .L_fixup_handle_s0 _asm_extable 19b, .L_fixup_handle_s0 diff --git a/arch/loongarch/lib/copy_user.S b/arch/loongarch/lib/copy_user.S index b21f6d5d38f51..fecd08cad702d 100644 --- a/arch/loongarch/lib/copy_user.S +++ b/arch/loongarch/lib/copy_user.S @@ -136,6 +136,7 @@ SYM_FUNC_START(__copy_user_fast) bgeu a1, a4, .Llt8 30: ld.d t0, a1, 0 31: st.d t0, a0, 0 + addi.d a0, a0, 8 .Llt8: 32: ld.d t0, a3, -8 @@ -246,7 +247,7 @@ SYM_FUNC_START(__copy_user_fast) _asm_extable 30b, .L_fixup_handle_0 _asm_extable 31b, .L_fixup_handle_0 _asm_extable 32b, .L_fixup_handle_0 - _asm_extable 33b, .L_fixup_handle_1 + _asm_extable 33b, .L_fixup_handle_0 _asm_extable 34b, .L_fixup_handle_s0 _asm_extable 35b, .L_fixup_handle_s0 _asm_extable 36b, .L_fixup_handle_s0 diff --git a/arch/loongarch/net/bpf_jit.h b/arch/loongarch/net/bpf_jit.h index c335dc4eed370..68586338ecf85 100644 --- a/arch/loongarch/net/bpf_jit.h +++ b/arch/loongarch/net/bpf_jit.h @@ -150,7 +150,7 @@ static inline void move_imm(struct jit_ctx *ctx, enum loongarch_gpr rd, long imm * no need to call lu32id to do a new filled operation. */ imm_51_31 = (imm >> 31) & 0x1fffff; - if (imm_51_31 != 0 || imm_51_31 != 0x1fffff) { + if (imm_51_31 != 0 && imm_51_31 != 0x1fffff) { /* lu32id rd, imm_51_32 */ imm_51_32 = (imm >> 32) & 0xfffff; emit_insn(ctx, lu32id, rd, imm_51_32); diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c index 9a44a98ba3420..3fbc2a6aa319d 100644 --- a/arch/powerpc/platforms/pseries/vas.c +++ b/arch/powerpc/platforms/pseries/vas.c @@ -744,6 +744,12 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds, } task_ref = &win->vas_win.task_ref; + /* + * VAS mmap (coproc_mmap()) and its fault handler + * (vas_mmap_fault()) are called after holding mmap lock. + * So hold mmap mutex after mmap_lock to avoid deadlock. 
+ */ + mmap_write_lock(task_ref->mm); mutex_lock(&task_ref->mmap_mutex); vma = task_ref->vma; /* @@ -752,7 +758,6 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds, */ win->vas_win.status |= flag; - mmap_write_lock(task_ref->mm); /* * vma is set in the original mapping. But this mapping * is done with mmap() after the window is opened with ioctl. @@ -762,8 +767,8 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds, if (vma) zap_vma_pages(vma); - mmap_write_unlock(task_ref->mm); mutex_unlock(&task_ref->mmap_mutex); + mmap_write_unlock(task_ref->mm); /* * Close VAS window in the hypervisor, but do not * free vas_window struct since it may be reused diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c index 3ce5f4351156a..899f3b8ac0110 100644 --- a/arch/s390/kvm/pv.c +++ b/arch/s390/kvm/pv.c @@ -411,8 +411,12 @@ int kvm_s390_pv_deinit_cleanup_all(struct kvm *kvm, u16 *rc, u16 *rrc) u16 _rc, _rrc; int cc = 0; - /* Make sure the counter does not reach 0 before calling s390_uv_destroy_range */ - atomic_inc(&kvm->mm->context.protected_count); + /* + * Nothing to do if the counter was already 0. Otherwise make sure + * the counter does not reach 0 before calling s390_uv_destroy_range. + */ + if (!atomic_inc_not_zero(&kvm->mm->context.protected_count)) + return 0; *rc = 1; /* If the current VM is protected, destroy it */ diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index dbe8394234e2b..2f123429a291b 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -421,6 +421,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access) vma_end_read(vma); if (!(fault & VM_FAULT_RETRY)) { count_vm_vma_lock_event(VMA_LOCK_SUCCESS); + if (likely(!(fault & VM_FAULT_ERROR))) + fault = 0; goto out; } count_vm_vma_lock_event(VMA_LOCK_RETRY); diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index dc90d1eb0d554..d7e8297d5642b 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -2846,6 +2846,7 @@ int s390_replace_asce(struct gmap *gmap) page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) return -ENOMEM; + page->index = 0; table = page_to_virt(page); memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT)); diff --git a/arch/um/os-Linux/sigio.c b/arch/um/os-Linux/sigio.c index 37d60e72cf269..9e71794839e87 100644 --- a/arch/um/os-Linux/sigio.c +++ b/arch/um/os-Linux/sigio.c @@ -3,7 +3,6 @@ * Copyright (C) 2002 - 2008 Jeff Dike (jdike@{addtoit,linux.intel}.com) */ -#include #include #include #include @@ -51,7 +50,7 @@ static struct pollfds all_sigio_fds; static int write_sigio_thread(void *unused) { - struct pollfds *fds; + struct pollfds *fds, tmp; struct pollfd *p; int i, n, respond_fd; char c; @@ -78,7 +77,9 @@ static int write_sigio_thread(void *unused) "write_sigio_thread : " "read on socket failed, " "err = %d\n", errno); - swap(current_poll, next_poll); + tmp = current_poll; + current_poll = next_poll; + next_poll = tmp; respond_fd = sigio_private[1]; } else { diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 13bc212cd4bc7..e3054e3e46d52 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -37,6 +37,7 @@ KVM_X86_OP(get_segment) KVM_X86_OP(get_cpl) KVM_X86_OP(set_segment) KVM_X86_OP(get_cs_db_l_bits) +KVM_X86_OP(is_valid_cr0) KVM_X86_OP(set_cr0) KVM_X86_OP_OPTIONAL(post_set_cr3) KVM_X86_OP(is_valid_cr4) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 
fb9d1f2d6136c..938fe9572ae10 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1566,9 +1566,10 @@ struct kvm_x86_ops { void (*set_segment)(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg); void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l); + bool (*is_valid_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); void (*post_set_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3); - bool (*is_valid_cr4)(struct kvm_vcpu *vcpu, unsigned long cr0); + bool (*is_valid_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4); void (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4); int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer); void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 182af64387d06..dbf7443c42ebd 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1199,19 +1199,21 @@ spectre_v2_user_select_mitigation(void) } /* - * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP + * If no STIBP, Intel enhanced IBRS is enabled, or SMT impossible, STIBP * is not required. * - * Enhanced IBRS also protects against cross-thread branch target + * Intel's Enhanced IBRS also protects against cross-thread branch target * injection in user-mode as the IBRS bit remains always set which * implicitly enables cross-thread protections. However, in legacy IBRS * mode, the IBRS bit is set only on kernel entry and cleared on return - * to userspace. This disables the implicit cross-thread protection, - * so allow for STIBP to be selected in that case. + * to userspace. AMD Automatic IBRS also does not protect userspace. + * These modes therefore disable the implicit cross-thread protection, + * so allow for STIBP to be selected in those cases. 
*/ if (!boot_cpu_has(X86_FEATURE_STIBP) || !smt_possible || - spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + (spectre_v2_in_eibrs_mode(spectre_v2_enabled) && + !boot_cpu_has(X86_FEATURE_AUTOIBRS))) return; /* @@ -2343,7 +2345,8 @@ static ssize_t mmio_stale_data_show_state(char *buf) static char *stibp_state(void) { - if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) && + !boot_cpu_has(X86_FEATURE_AUTOIBRS)) return ""; switch (spectre_v2_user_stibp) { diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 0b971f9740964..542fe6915ab78 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -1259,10 +1259,10 @@ static void __threshold_remove_blocks(struct threshold_bank *b) struct threshold_block *pos = NULL; struct threshold_block *tmp = NULL; - kobject_del(b->kobj); + kobject_put(b->kobj); list_for_each_entry_safe(pos, tmp, &b->blocks->miscj, miscj) - kobject_del(&pos->kobj); + kobject_put(b->kobj); } static void threshold_remove_bank(struct threshold_bank *bank) diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index 58b1f208eff51..4a817d20ce3bb 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -697,9 +697,10 @@ static bool try_fixup_enqcmd_gp(void) } static bool gp_try_fixup_and_notify(struct pt_regs *regs, int trapnr, - unsigned long error_code, const char *str) + unsigned long error_code, const char *str, + unsigned long address) { - if (fixup_exception(regs, trapnr, error_code, 0)) + if (fixup_exception(regs, trapnr, error_code, address)) return true; current->thread.error_code = error_code; @@ -759,7 +760,7 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection) goto exit; } - if (gp_try_fixup_and_notify(regs, X86_TRAP_GP, error_code, desc)) + if (gp_try_fixup_and_notify(regs, X86_TRAP_GP, error_code, desc, 0)) goto exit; if (error_code) @@ -1357,17 +1358,20 @@ DEFINE_IDTENTRY(exc_device_not_available) #define VE_FAULT_STR "VE fault" -static void ve_raise_fault(struct pt_regs *regs, long error_code) +static void ve_raise_fault(struct pt_regs *regs, long error_code, + unsigned long address) { if (user_mode(regs)) { gp_user_force_sig_segv(regs, X86_TRAP_VE, error_code, VE_FAULT_STR); return; } - if (gp_try_fixup_and_notify(regs, X86_TRAP_VE, error_code, VE_FAULT_STR)) + if (gp_try_fixup_and_notify(regs, X86_TRAP_VE, error_code, + VE_FAULT_STR, address)) { return; + } - die_addr(VE_FAULT_STR, regs, error_code, 0); + die_addr(VE_FAULT_STR, regs, error_code, address); } /* @@ -1431,7 +1435,7 @@ DEFINE_IDTENTRY(exc_virtualization_exception) * it successfully, treat it as #GP(0) and handle it. 
*/ if (!tdx_handle_virt_exception(regs, &ve)) - ve_raise_fault(regs, 0); + ve_raise_fault(regs, 0, ve.gla); cond_local_irq_disable(regs); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 54089f990c8f8..5a0c2d6791a0a 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1799,6 +1799,11 @@ static void sev_post_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3) } } +static bool svm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) +{ + return true; +} + void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) { struct vcpu_svm *svm = to_svm(vcpu); @@ -4838,6 +4843,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .set_segment = svm_set_segment, .get_cpl = svm_get_cpl, .get_cs_db_l_bits = svm_get_cs_db_l_bits, + .is_valid_cr0 = svm_is_valid_cr0, .set_cr0 = svm_set_cr0, .post_set_cr3 = sev_post_set_cr3, .is_valid_cr4 = svm_is_valid_cr4, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 44fb619803b89..40f54cbf3f333 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1503,6 +1503,11 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) struct vcpu_vmx *vmx = to_vmx(vcpu); unsigned long old_rflags; + /* + * Unlike CR0 and CR4, RFLAGS handling requires checking if the vCPU + * is an unrestricted guest in order to mark L2 as needing emulation + * if L1 runs L2 as a restricted guest. + */ if (is_unrestricted_guest(vcpu)) { kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS); vmx->rflags = rflags; @@ -3040,6 +3045,15 @@ static void enter_rmode(struct kvm_vcpu *vcpu) struct vcpu_vmx *vmx = to_vmx(vcpu); struct kvm_vmx *kvm_vmx = to_kvm_vmx(vcpu->kvm); + /* + * KVM should never use VM86 to virtualize Real Mode when L2 is active, + * as using VM86 is unnecessary if unrestricted guest is enabled, and + * if unrestricted guest is disabled, VM-Enter (from L1) with CR0.PG=0 + * should VM-Fail and KVM should reject userspace attempts to stuff + * CR0.PG=0 when L2 is active. + */ + WARN_ON_ONCE(is_guest_mode(vcpu)); + vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR); vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES); vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS); @@ -3229,6 +3243,17 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu) #define CR3_EXITING_BITS (CPU_BASED_CR3_LOAD_EXITING | \ CPU_BASED_CR3_STORE_EXITING) +static bool vmx_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) +{ + if (is_guest_mode(vcpu)) + return nested_guest_cr0_valid(vcpu, cr0); + + if (to_vmx(vcpu)->nested.vmxon) + return nested_host_cr0_valid(vcpu, cr0); + + return true; +} + void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) { struct vcpu_vmx *vmx = to_vmx(vcpu); @@ -3238,7 +3263,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) old_cr0_pg = kvm_read_cr0_bits(vcpu, X86_CR0_PG); hw_cr0 = (cr0 & ~KVM_VM_CR0_ALWAYS_OFF); - if (is_unrestricted_guest(vcpu)) + if (enable_unrestricted_guest) hw_cr0 |= KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST; else { hw_cr0 |= KVM_VM_CR0_ALWAYS_ON; @@ -3266,7 +3291,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) } #endif - if (enable_ept && !is_unrestricted_guest(vcpu)) { + if (enable_ept && !enable_unrestricted_guest) { /* * Ensure KVM has an up-to-date snapshot of the guest's CR3. 
If * the below code _enables_ CR3 exiting, vmx_cache_reg() will @@ -3397,7 +3422,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) unsigned long hw_cr4; hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE); - if (is_unrestricted_guest(vcpu)) + if (enable_unrestricted_guest) hw_cr4 |= KVM_VM_CR4_ALWAYS_ON_UNRESTRICTED_GUEST; else if (vmx->rmode.vm86_active) hw_cr4 |= KVM_RMODE_VM_CR4_ALWAYS_ON; @@ -3417,7 +3442,7 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) vcpu->arch.cr4 = cr4; kvm_register_mark_available(vcpu, VCPU_EXREG_CR4); - if (!is_unrestricted_guest(vcpu)) { + if (!enable_unrestricted_guest) { if (enable_ept) { if (!is_paging(vcpu)) { hw_cr4 &= ~X86_CR4_PAE; @@ -5367,18 +5392,11 @@ static int handle_set_cr0(struct kvm_vcpu *vcpu, unsigned long val) val = (val & ~vmcs12->cr0_guest_host_mask) | (vmcs12->guest_cr0 & vmcs12->cr0_guest_host_mask); - if (!nested_guest_cr0_valid(vcpu, val)) - return 1; - if (kvm_set_cr0(vcpu, val)) return 1; vmcs_writel(CR0_READ_SHADOW, orig_val); return 0; } else { - if (to_vmx(vcpu)->nested.vmxon && - !nested_host_cr0_valid(vcpu, val)) - return 1; - return kvm_set_cr0(vcpu, val); } } @@ -8160,6 +8178,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .set_segment = vmx_set_segment, .get_cpl = vmx_get_cpl, .get_cs_db_l_bits = vmx_get_cs_db_l_bits, + .is_valid_cr0 = vmx_is_valid_cr0, .set_cr0 = vmx_set_cr0, .is_valid_cr4 = vmx_is_valid_cr4, .set_cr4 = vmx_set_cr4, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 04b57a336b34e..f04bed5a5aff9 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -906,6 +906,22 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3) } EXPORT_SYMBOL_GPL(load_pdptrs); +static bool kvm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) +{ +#ifdef CONFIG_X86_64 + if (cr0 & 0xffffffff00000000UL) + return false; +#endif + + if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD)) + return false; + + if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE)) + return false; + + return static_call(kvm_x86_is_valid_cr0)(vcpu, cr0); +} + void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0) { /* @@ -952,20 +968,13 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) { unsigned long old_cr0 = kvm_read_cr0(vcpu); - cr0 |= X86_CR0_ET; - -#ifdef CONFIG_X86_64 - if (cr0 & 0xffffffff00000000UL) + if (!kvm_is_valid_cr0(vcpu, cr0)) return 1; -#endif - - cr0 &= ~CR0_RESERVED_BITS; - if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD)) - return 1; + cr0 |= X86_CR0_ET; - if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE)) - return 1; + /* Write to CR0 reserved bits are ignored, even on Intel. 
*/ + cr0 &= ~CR0_RESERVED_BITS; #ifdef CONFIG_X86_64 if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) && @@ -11461,7 +11470,8 @@ static bool kvm_is_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) return false; } - return kvm_is_valid_cr4(vcpu, sregs->cr4); + return kvm_is_valid_cr4(vcpu, sregs->cr4) && + kvm_is_valid_cr0(vcpu, sregs->cr0); } static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs, diff --git a/block/blk-core.c b/block/blk-core.c index 3fc68b9444791..0434f5a8151fe 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1141,8 +1141,7 @@ void __blk_flush_plug(struct blk_plug *plug, bool from_schedule) { if (!list_empty(&plug->cb_list)) flush_plug_callbacks(plug, from_schedule); - if (!rq_list_empty(plug->mq_list)) - blk_mq_flush_plug_list(plug, from_schedule); + blk_mq_flush_plug_list(plug, from_schedule); /* * Unconditionally flush out cached requests, even if the unplug * event came from schedule. Since we know hold references to the diff --git a/block/blk-mq.c b/block/blk-mq.c index 73ed8ccb09ce8..58bf41e8e66c7 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2754,7 +2754,14 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule) { struct request *rq; - if (rq_list_empty(plug->mq_list)) + /* + * We may have been called recursively midway through handling + * plug->mq_list via a schedule() in the driver's queue_rq() callback. + * To avoid mq_list changing under our feet, clear rq_count early and + * bail out specifically if rq_count is 0 rather than checking + * whether the mq_list is empty. + */ + if (plug->rq_count == 0) return; plug->rq_count = 0; diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 38fb84974f352..8a384e6cfa132 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -1006,9 +1006,6 @@ static void iort_node_get_rmr_info(struct acpi_iort_node *node, for (i = 0; i < node->mapping_count; i++, map++) { struct acpi_iort_node *parent; - if (!map->id_count) - continue; - parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, map->output_reference); if (parent != iommu) diff --git a/drivers/ata/pata_ns87415.c b/drivers/ata/pata_ns87415.c index d60e1f69d7b02..c697219a61a2d 100644 --- a/drivers/ata/pata_ns87415.c +++ b/drivers/ata/pata_ns87415.c @@ -260,7 +260,7 @@ static u8 ns87560_check_status(struct ata_port *ap) * LOCKING: * Inherited from caller. 
*/ -void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf) +static void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf) { struct ata_ioports *ioaddr = &ap->ioaddr; diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h index 0eb7f02b3ad59..922ed457db191 100644 --- a/drivers/base/power/power.h +++ b/drivers/base/power/power.h @@ -29,6 +29,7 @@ extern u64 pm_runtime_active_time(struct device *dev); #define WAKE_IRQ_DEDICATED_MASK (WAKE_IRQ_DEDICATED_ALLOCATED | \ WAKE_IRQ_DEDICATED_MANAGED | \ WAKE_IRQ_DEDICATED_REVERSE) +#define WAKE_IRQ_DEDICATED_ENABLED BIT(3) struct wake_irq { struct device *dev; diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c index d487a6bac630f..afd094dec5ca3 100644 --- a/drivers/base/power/wakeirq.c +++ b/drivers/base/power/wakeirq.c @@ -314,8 +314,10 @@ void dev_pm_enable_wake_irq_check(struct device *dev, return; enable: - if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) + if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) { enable_irq(wirq->irq); + wirq->status |= WAKE_IRQ_DEDICATED_ENABLED; + } } /** @@ -336,8 +338,10 @@ void dev_pm_disable_wake_irq_check(struct device *dev, bool cond_disable) if (cond_disable && (wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) return; - if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED) + if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED) { + wirq->status &= ~WAKE_IRQ_DEDICATED_ENABLED; disable_irq_nosync(wirq->irq); + } } /** @@ -376,7 +380,7 @@ void dev_pm_arm_wake_irq(struct wake_irq *wirq) if (device_may_wakeup(wirq->dev)) { if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED && - !pm_runtime_status_suspended(wirq->dev)) + !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED)) enable_irq(wirq->irq); enable_irq_wake(wirq->irq); @@ -399,7 +403,7 @@ void dev_pm_disarm_wake_irq(struct wake_irq *wirq) disable_irq_wake(wirq->irq); if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED && - !pm_runtime_status_suspended(wirq->dev)) + !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED)) disable_irq_nosync(wirq->irq); } } diff --git a/drivers/base/regmap/regmap-kunit.c b/drivers/base/regmap/regmap-kunit.c index f76d416881349..0b3dacc7fa424 100644 --- a/drivers/base/regmap/regmap-kunit.c +++ b/drivers/base/regmap/regmap-kunit.c @@ -58,6 +58,9 @@ static struct regmap *gen_regmap(struct regmap_config *config, int i; struct reg_default *defaults; + config->disable_locking = config->cache_type == REGCACHE_RBTREE || + config->cache_type == REGCACHE_MAPLE; + buf = kmalloc(size, GFP_KERNEL); if (!buf) return ERR_PTR(-ENOMEM); diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index 632751ddb2870..5c86151b0d3a5 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -3849,51 +3849,82 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result) list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list); } -static int get_lock_owner_info(struct rbd_device *rbd_dev, - struct ceph_locker **lockers, u32 *num_lockers) +static bool locker_equal(const struct ceph_locker *lhs, + const struct ceph_locker *rhs) +{ + return lhs->id.name.type == rhs->id.name.type && + lhs->id.name.num == rhs->id.name.num && + !strcmp(lhs->id.cookie, rhs->id.cookie) && + ceph_addr_equal_no_type(&lhs->info.addr, &rhs->info.addr); +} + +static void free_locker(struct ceph_locker *locker) +{ + if (locker) + ceph_free_lockers(locker, 1); +} + +static struct ceph_locker *get_lock_owner_info(struct rbd_device *rbd_dev) { struct ceph_osd_client *osdc = 
&rbd_dev->rbd_client->client->osdc; + struct ceph_locker *lockers; + u32 num_lockers; u8 lock_type; char *lock_tag; + u64 handle; int ret; - dout("%s rbd_dev %p\n", __func__, rbd_dev); - ret = ceph_cls_lock_info(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, RBD_LOCK_NAME, - &lock_type, &lock_tag, lockers, num_lockers); - if (ret) - return ret; + &lock_type, &lock_tag, &lockers, &num_lockers); + if (ret) { + rbd_warn(rbd_dev, "failed to retrieve lockers: %d", ret); + return ERR_PTR(ret); + } - if (*num_lockers == 0) { + if (num_lockers == 0) { dout("%s rbd_dev %p no lockers detected\n", __func__, rbd_dev); + lockers = NULL; goto out; } if (strcmp(lock_tag, RBD_LOCK_TAG)) { rbd_warn(rbd_dev, "locked by external mechanism, tag %s", lock_tag); - ret = -EBUSY; - goto out; + goto err_busy; } - if (lock_type == CEPH_CLS_LOCK_SHARED) { - rbd_warn(rbd_dev, "shared lock type detected"); - ret = -EBUSY; - goto out; + if (lock_type != CEPH_CLS_LOCK_EXCLUSIVE) { + rbd_warn(rbd_dev, "incompatible lock type detected"); + goto err_busy; } - if (strncmp((*lockers)[0].id.cookie, RBD_LOCK_COOKIE_PREFIX, - strlen(RBD_LOCK_COOKIE_PREFIX))) { + WARN_ON(num_lockers != 1); + ret = sscanf(lockers[0].id.cookie, RBD_LOCK_COOKIE_PREFIX " %llu", + &handle); + if (ret != 1) { rbd_warn(rbd_dev, "locked by external mechanism, cookie %s", - (*lockers)[0].id.cookie); - ret = -EBUSY; - goto out; + lockers[0].id.cookie); + goto err_busy; } + if (ceph_addr_is_blank(&lockers[0].info.addr)) { + rbd_warn(rbd_dev, "locker has a blank address"); + goto err_busy; + } + + dout("%s rbd_dev %p got locker %s%llu@%pISpc/%u handle %llu\n", + __func__, rbd_dev, ENTITY_NAME(lockers[0].id.name), + &lockers[0].info.addr.in_addr, + le32_to_cpu(lockers[0].info.addr.nonce), handle); out: kfree(lock_tag); - return ret; + return lockers; + +err_busy: + kfree(lock_tag); + ceph_free_lockers(lockers, num_lockers); + return ERR_PTR(-EBUSY); } static int find_watcher(struct rbd_device *rbd_dev, @@ -3947,51 +3978,68 @@ out: static int rbd_try_lock(struct rbd_device *rbd_dev) { struct ceph_client *client = rbd_dev->rbd_client->client; - struct ceph_locker *lockers; - u32 num_lockers; + struct ceph_locker *locker, *refreshed_locker; int ret; for (;;) { + locker = refreshed_locker = NULL; + ret = rbd_lock(rbd_dev); if (ret != -EBUSY) - return ret; + goto out; /* determine if the current lock holder is still alive */ - ret = get_lock_owner_info(rbd_dev, &lockers, &num_lockers); - if (ret) - return ret; - - if (num_lockers == 0) + locker = get_lock_owner_info(rbd_dev); + if (IS_ERR(locker)) { + ret = PTR_ERR(locker); + locker = NULL; + goto out; + } + if (!locker) goto again; - ret = find_watcher(rbd_dev, lockers); + ret = find_watcher(rbd_dev, locker); if (ret) goto out; /* request lock or error */ + refreshed_locker = get_lock_owner_info(rbd_dev); + if (IS_ERR(refreshed_locker)) { + ret = PTR_ERR(refreshed_locker); + refreshed_locker = NULL; + goto out; + } + if (!refreshed_locker || + !locker_equal(locker, refreshed_locker)) + goto again; + rbd_warn(rbd_dev, "breaking header lock owned by %s%llu", - ENTITY_NAME(lockers[0].id.name)); + ENTITY_NAME(locker->id.name)); ret = ceph_monc_blocklist_add(&client->monc, - &lockers[0].info.addr); + &locker->info.addr); if (ret) { - rbd_warn(rbd_dev, "blocklist of %s%llu failed: %d", - ENTITY_NAME(lockers[0].id.name), ret); + rbd_warn(rbd_dev, "failed to blocklist %s%llu: %d", + ENTITY_NAME(locker->id.name), ret); goto out; } ret = ceph_cls_break_lock(&client->osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, 
RBD_LOCK_NAME, - lockers[0].id.cookie, - &lockers[0].id.name); - if (ret && ret != -ENOENT) + locker->id.cookie, &locker->id.name); + if (ret && ret != -ENOENT) { + rbd_warn(rbd_dev, "failed to break header lock: %d", + ret); goto out; + } again: - ceph_free_lockers(lockers, num_lockers); + free_locker(refreshed_locker); + free_locker(locker); } out: - ceph_free_lockers(lockers, num_lockers); + free_locker(refreshed_locker); + free_locker(locker); return ret; } diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index 33d3298a0da16..e6b6e5eee4dea 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -1632,7 +1632,8 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd) if (ublksrv_pid <= 0) return -EINVAL; - wait_for_completion_interruptible(&ub->completion); + if (wait_for_completion_interruptible(&ub->completion) != 0) + return -EINTR; schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD); @@ -1908,8 +1909,8 @@ static int ublk_ctrl_del_dev(struct ublk_device **p_ub) * - the device number is freed already, we will not find this * device via ublk_get_device_from_id() */ - wait_event_interruptible(ublk_idr_wq, ublk_idr_freed(idx)); - + if (wait_event_interruptible(ublk_idr_wq, ublk_idr_freed(idx))) + return -EINTR; return 0; } @@ -2106,7 +2107,9 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub, pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n", __func__, ub->dev_info.nr_hw_queues, header->dev_id); /* wait until new ubq_daemon sending all FETCH_REQ */ - wait_for_completion_interruptible(&ub->completion); + if (wait_for_completion_interruptible(&ub->completion)) + return -EINTR; + pr_devel("%s: All new ubq_daemons(nr: %d) are ready, dev id %d\n", __func__, ub->dev_info.nr_hw_queues, header->dev_id); diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c index 88a5384c09c02..b95963095729a 100644 --- a/drivers/char/tpm/tpm_tis_core.c +++ b/drivers/char/tpm/tpm_tis_core.c @@ -366,8 +366,13 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) goto out; } - size += recv_data(chip, &buf[TPM_HEADER_SIZE], - expected - TPM_HEADER_SIZE); + rc = recv_data(chip, &buf[TPM_HEADER_SIZE], + expected - TPM_HEADER_SIZE); + if (rc < 0) { + size = rc; + goto out; + } + size += rc; if (size < expected) { dev_err(&chip->dev, "Unable to read remainder of result\n"); size = -ETIME; diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 7e1765b09e04a..8757bf886207b 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -296,9 +296,8 @@ err_xormap: else rc = cxl_decoder_autoremove(dev, cxld); if (rc) { - dev_err(dev, "Failed to add decode range [%#llx - %#llx]\n", - cxld->hpa_range.start, cxld->hpa_range.end); - return 0; + dev_err(dev, "Failed to add decode range: %pr", res); + return rc; } dev_dbg(dev, "add: %s node: %d range [%#llx - %#llx]\n", dev_name(&cxld->dev), diff --git a/drivers/dma-buf/dma-fence-unwrap.c b/drivers/dma-buf/dma-fence-unwrap.c index 7002bca792ff0..c625bb2b5d563 100644 --- a/drivers/dma-buf/dma-fence-unwrap.c +++ b/drivers/dma-buf/dma-fence-unwrap.c @@ -66,18 +66,36 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences, { struct dma_fence_array *result; struct dma_fence *tmp, **array; + ktime_t timestamp; unsigned int i; size_t count; count = 0; + timestamp = ns_to_ktime(0); for (i = 0; i < num_fences; ++i) { - dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) - if (!dma_fence_is_signaled(tmp)) + 
dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) { + if (!dma_fence_is_signaled(tmp)) { ++count; + } else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, + &tmp->flags)) { + if (ktime_after(tmp->timestamp, timestamp)) + timestamp = tmp->timestamp; + } else { + /* + * Use the current time if the fence is + * currently signaling. + */ + timestamp = ktime_get(); + } + } } + /* + * If we couldn't find a pending fence just return a private signaled + * fence with the timestamp of the last signaled one. + */ if (count == 0) - return dma_fence_get_stub(); + return dma_fence_allocate_private_stub(timestamp); array = kmalloc_array(count, sizeof(*array), GFP_KERNEL); if (!array) @@ -138,7 +156,7 @@ restart: } while (tmp); if (count == 0) { - tmp = dma_fence_get_stub(); + tmp = dma_fence_allocate_private_stub(ktime_get()); goto return_tmp; } diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index f177c56269bb0..8aa8f8cb7071e 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -150,16 +150,17 @@ EXPORT_SYMBOL(dma_fence_get_stub); /** * dma_fence_allocate_private_stub - return a private, signaled fence + * @timestamp: timestamp when the fence was signaled * * Return a newly allocated and signaled stub fence. */ -struct dma_fence *dma_fence_allocate_private_stub(void) +struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp) { struct dma_fence *fence; fence = kzalloc(sizeof(*fence), GFP_KERNEL); if (fence == NULL) - return ERR_PTR(-ENOMEM); + return NULL; dma_fence_init(fence, &dma_fence_stub_ops, @@ -169,7 +170,7 @@ struct dma_fence *dma_fence_allocate_private_stub(void) set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags); - dma_fence_signal(fence); + dma_fence_signal_timestamp(fence, timestamp); return fence; } diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c index a68f682aec012..67497116ce27d 100644 --- a/drivers/gpio/gpio-mvebu.c +++ b/drivers/gpio/gpio-mvebu.c @@ -874,7 +874,7 @@ static int mvebu_pwm_probe(struct platform_device *pdev, spin_lock_init(&mvpwm->lock); - return pwmchip_add(&mvpwm->chip); + return devm_pwmchip_add(dev, &mvpwm->chip); } #ifdef CONFIG_DEBUG_FS @@ -1112,6 +1112,13 @@ static int mvebu_gpio_probe_syscon(struct platform_device *pdev, return 0; } +static void mvebu_gpio_remove_irq_domain(void *data) +{ + struct irq_domain *domain = data; + + irq_domain_remove(domain); +} + static int mvebu_gpio_probe(struct platform_device *pdev) { struct mvebu_gpio_chip *mvchip; @@ -1243,17 +1250,21 @@ static int mvebu_gpio_probe(struct platform_device *pdev) if (!mvchip->domain) { dev_err(&pdev->dev, "couldn't allocate irq domain %s (DT).\n", mvchip->chip.label); - err = -ENODEV; - goto err_pwm; + return -ENODEV; } + err = devm_add_action_or_reset(&pdev->dev, mvebu_gpio_remove_irq_domain, + mvchip->domain); + if (err) + return err; + err = irq_alloc_domain_generic_chips( mvchip->domain, ngpios, 2, np->name, handle_level_irq, IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_LEVEL, 0, 0); if (err) { dev_err(&pdev->dev, "couldn't allocate irq chips %s (DT).\n", mvchip->chip.label); - goto err_domain; + return err; } /* @@ -1293,13 +1304,6 @@ static int mvebu_gpio_probe(struct platform_device *pdev) } return 0; - -err_domain: - irq_domain_remove(mvchip->domain); -err_pwm: - pwmchip_remove(&mvchip->mvpwm->chip); - - return err; } static struct platform_driver mvebu_gpio_driver = { diff --git a/drivers/gpio/gpio-tps68470.c b/drivers/gpio/gpio-tps68470.c index aaddcabe9b359..532deaddfd4e2 100644 --- a/drivers/gpio/gpio-tps68470.c +++ 
b/drivers/gpio/gpio-tps68470.c @@ -91,13 +91,13 @@ static int tps68470_gpio_output(struct gpio_chip *gc, unsigned int offset, struct tps68470_gpio_data *tps68470_gpio = gpiochip_get_data(gc); struct regmap *regmap = tps68470_gpio->tps68470_regmap; + /* Set the initial value */ + tps68470_gpio_set(gc, offset, value); + /* rest are always outputs */ if (offset >= TPS68470_N_REGULAR_GPIO) return 0; - /* Set the initial value */ - tps68470_gpio_set(gc, offset, value); - return regmap_update_bits(regmap, TPS68470_GPIO_CTL_REG_A(offset), TPS68470_GPIO_MODE_MASK, TPS68470_GPIO_MODE_OUT_CMOS); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 02b827785e399..129081ffa0a5f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -1246,6 +1246,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev, void amdgpu_device_pci_config_reset(struct amdgpu_device *adev); int amdgpu_device_pci_reset(struct amdgpu_device *adev); bool amdgpu_device_need_post(struct amdgpu_device *adev); +bool amdgpu_device_pcie_dynamic_switching_supported(void); bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev); bool amdgpu_device_aspm_support_quirk(void); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 5c7d40873ee20..167b2a1c416eb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -1352,6 +1352,25 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev) return true; } +/* + * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic + * speed switching. Until we have confirmation from Intel that a specific host + * supports it, it's safer that we keep it disabled for all. 
+ * + * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/ + * https://gitlab.freedesktop.org/drm/amd/-/issues/2663 + */ +bool amdgpu_device_pcie_dynamic_switching_supported(void) +{ +#if IS_ENABLED(CONFIG_X86) + struct cpuinfo_x86 *c = &cpu_data(0); + + if (c->x86_vendor == X86_VENDOR_INTEL) + return false; +#endif + return true; +} + /** * amdgpu_device_should_use_aspm - check if the device should program ASPM * diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c index e4757a2807d9a..db820331f2c61 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c @@ -491,11 +491,11 @@ static int psp_sw_init(void *handle) return 0; failed2: - amdgpu_bo_free_kernel(&psp->fw_pri_bo, - &psp->fw_pri_mc_addr, &psp->fw_pri_buf); -failed1: amdgpu_bo_free_kernel(&psp->fence_buf_bo, &psp->fence_buf_mc_addr, &psp->fence_buf); +failed1: + amdgpu_bo_free_kernel(&psp->fw_pri_bo, + &psp->fw_pri_mc_addr, &psp->fw_pri_buf); return ret; } diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index 888e80f498e97..9bc86deac9e8e 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -706,7 +706,7 @@ void dm_handle_mst_sideband_msg_ready_event( if (retry == 3) { DRM_ERROR("Failed to ack MST event.\n"); - return; + break; } drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr); diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c index d647f68fd5630..4f61d4f257cd7 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c @@ -24,6 +24,7 @@ */ #include "amdgpu_dm_psr.h" +#include "dc_dmub_srv.h" #include "dc.h" #include "dm_helpers.h" #include "amdgpu_dm.h" @@ -50,7 +51,7 @@ static bool link_supports_psrsu(struct dc_link *link) !link->dpcd_caps.psr_info.psr2_su_y_granularity_cap) return false; - return true; + return dc_dmub_check_min_version(dc->ctx->dmub_srv->dmub); } /* diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index 6eace83c9c6f5..d22095a3a265a 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -2626,8 +2626,11 @@ static enum surface_update_type check_update_surfaces_for_stream( if (stream_update->mst_bw_update) su_flags->bits.mst_bw = 1; - if (stream_update->crtc_timing_adjust && dc_extended_blank_supported(dc)) - su_flags->bits.crtc_timing_adjust = 1; + + if (stream_update->stream && stream_update->stream->freesync_on_desktop && + (stream_update->vrr_infopacket || stream_update->allow_freesync || + stream_update->vrr_active_variable)) + su_flags->bits.fams_changed = 1; if (su_flags->raw != 0) overall_type = UPDATE_TYPE_FULL; @@ -4894,21 +4897,3 @@ void dc_notify_vsync_int_state(struct dc *dc, struct dc_stream_state *stream, bo if (pipe->stream_res.abm && pipe->stream_res.abm->funcs->set_abm_pause) pipe->stream_res.abm->funcs->set_abm_pause(pipe->stream_res.abm, !enable, i, pipe->stream_res.tg->inst); } - -/** - * dc_extended_blank_supported - Decide whether extended blank is supported - * - * @dc: [in] Current DC state - * - * Extended blank is a freesync optimization feature to be enabled in the - * 
future. During the extra vblank period gained from freesync, we have the - * ability to enter z9/z10. - * - * Return: - * Indicate whether extended blank is supported (%true or %false) - */ -bool dc_extended_blank_supported(struct dc *dc) -{ - return dc->debug.extended_blank_optimization && !dc->debug.disable_z10 - && dc->caps.zstate_support && dc->caps.is_apu; -} diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h index 4d93ca9c627b0..9279990e43694 100644 --- a/drivers/gpu/drm/amd/display/dc/dc.h +++ b/drivers/gpu/drm/amd/display/dc/dc.h @@ -855,7 +855,6 @@ struct dc_debug_options { bool force_usr_allow; /* uses value at boot and disables switch */ bool disable_dtb_ref_clk_switch; - uint32_t fixed_vs_aux_delay_config_wa; bool extended_blank_optimization; union aux_wake_wa_options aux_wake_wa; uint32_t mst_start_top_delay; @@ -2126,8 +2125,6 @@ struct dc_sink_init_data { bool converter_disable_audio; }; -bool dc_extended_blank_supported(struct dc *dc); - struct dc_sink *dc_sink_create(const struct dc_sink_init_data *init_params); /* Newer interfaces */ diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c index a9b9490a532c2..ab4542b57b9a3 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c +++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c @@ -1079,3 +1079,10 @@ void dc_send_update_cursor_info_to_dmu( dc_send_cmd_to_dmu(pCtx->stream->ctx->dmub_srv, &cmd); } } + +bool dc_dmub_check_min_version(struct dmub_srv *srv) +{ + if (!srv->hw_funcs.is_psrsu_supported) + return true; + return srv->hw_funcs.is_psrsu_supported(srv); +} diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h index d34f5563df2ec..9a248ced03b9c 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h +++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h @@ -89,4 +89,5 @@ void dc_dmub_setup_subvp_dmub_command(struct dc *dc, struct dc_state *context, b void dc_dmub_srv_log_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv); void dc_send_update_cursor_info_to_dmu(struct pipe_ctx *pCtx, uint8_t pipe_idx); +bool dc_dmub_check_min_version(struct dmub_srv *srv); #endif /* _DMUB_DC_SRV_H_ */ diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h index 25284006019c3..270282fbda4ab 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_stream.h +++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h @@ -131,6 +131,7 @@ union stream_update_flags { uint32_t dsc_changed : 1; uint32_t mst_bw : 1; uint32_t crtc_timing_adjust : 1; + uint32_t fams_changed : 1; } bits; uint32_t raw; diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h index 45ab48fe5d004..139a77acd5d02 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_types.h +++ b/drivers/gpu/drm/amd/display/dc/dc_types.h @@ -196,6 +196,7 @@ struct dc_panel_patch { unsigned int disable_fams; unsigned int skip_avmute; unsigned int mst_start_top_delay; + unsigned int delay_disable_aux_intercept_ms; }; struct dc_edid_caps { diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c index c38be3c6c234e..a621b6a27c1fc 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c @@ -2128,7 +2128,7 @@ void dcn20_optimize_bandwidth( dc->clk_mgr, context, true); - if (dc_extended_blank_supported(dc) && context->bw_ctx.bw.dcn.clk.zstate_support == DCN_ZSTATE_SUPPORT_ALLOW) { 
+ if (context->bw_ctx.bw.dcn.clk.zstate_support == DCN_ZSTATE_SUPPORT_ALLOW) { for (i = 0; i < dc->res_pool->pipe_count; ++i) { struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; @@ -2136,7 +2136,7 @@ void dcn20_optimize_bandwidth( && pipe_ctx->stream->adjust.v_total_min == pipe_ctx->stream->adjust.v_total_max && pipe_ctx->stream->adjust.v_total_max > pipe_ctx->stream->timing.v_total) pipe_ctx->plane_res.hubp->funcs->program_extended_blank(pipe_ctx->plane_res.hubp, - pipe_ctx->dlg_regs.optimized_min_dst_y_next_start); + pipe_ctx->dlg_regs.min_dst_y_next_start); } } } diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c index c95f000b63b28..34b08d90dc1da 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c @@ -301,7 +301,12 @@ static void optc3_wait_drr_doublebuffer_pending_clear(struct timing_generator *o void optc3_set_vtotal_min_max(struct timing_generator *optc, int vtotal_min, int vtotal_max) { - optc1_set_vtotal_min_max(optc, vtotal_min, vtotal_max); + struct dc *dc = optc->ctx->dc; + + if (dc->caps.dmub_caps.mclk_sw && !dc->debug.disable_fams) + dc_dmub_srv_drr_update_cmd(dc, optc->inst, vtotal_min, vtotal_max); + else + optc1_set_vtotal_min_max(optc, vtotal_min, vtotal_max); } void optc3_tg_init(struct timing_generator *optc) diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c index 7e7cd5b64e6a1..7445ed27852a1 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c +++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c @@ -103,6 +103,7 @@ static void dcn31_program_det_size(struct hubbub *hubbub, int hubp_inst, unsigne default: break; } + DC_LOG_DEBUG("Set DET%d to %d segments\n", hubp_inst, det_size_segments); /* Should never be hit, if it is we have an erroneous hw config*/ ASSERT(hubbub2->det0_size + hubbub2->det1_size + hubbub2->det2_size + hubbub2->det3_size + hubbub2->compbuf_size_segments <= hubbub2->crb_size_segs); diff --git a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c index 41c972c8eb198..ae99b2851e019 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c @@ -136,6 +136,9 @@ #define DCN3_15_MAX_DET_SIZE 384 #define DCN3_15_CRB_SEGMENT_SIZE_KB 64 +#define DCN3_15_MAX_DET_SEGS (DCN3_15_MAX_DET_SIZE / DCN3_15_CRB_SEGMENT_SIZE_KB) +/* Minimum 2 extra segments need to be in compbuf and claimable to guarantee seamless mpo transitions */ +#define MIN_RESERVED_DET_SEGS 2 enum dcn31_clk_src_array_id { DCN31_CLK_SRC_PLL0, @@ -1636,21 +1639,61 @@ static bool is_dual_plane(enum surface_pixel_format format) return format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN || format == SURFACE_PIXEL_FORMAT_GRPH_RGBE_ALPHA; } +static int source_format_to_bpp (enum source_format_class SourcePixelFormat) +{ + if (SourcePixelFormat == dm_444_64) + return 8; + else if (SourcePixelFormat == dm_444_16 || SourcePixelFormat == dm_444_16) + return 2; + else if (SourcePixelFormat == dm_444_8) + return 1; + else if (SourcePixelFormat == dm_rgbe_alpha) + return 5; + else if (SourcePixelFormat == dm_420_8) + return 3; + else if (SourcePixelFormat == dm_420_12) + return 6; + else + return 4; +} + +static bool allow_pixel_rate_crb(struct dc *dc, struct dc_state *context) +{ + int i; + struct resource_context *res_ctx = &context->res_ctx; + + /*Don't apply for 
single stream*/ + if (context->stream_count < 2) + return false; + + for (i = 0; i < dc->res_pool->pipe_count; i++) { + if (!res_ctx->pipe_ctx[i].stream) + continue; + + /*Don't apply if MPO to avoid transition issues*/ + if (res_ctx->pipe_ctx[i].top_pipe && res_ctx->pipe_ctx[i].top_pipe->plane_state != res_ctx->pipe_ctx[i].plane_state) + return false; + } + return true; +} + static int dcn315_populate_dml_pipes_from_context( struct dc *dc, struct dc_state *context, display_e2e_pipe_params_st *pipes, bool fast_validate) { - int i, pipe_cnt; + int i, pipe_cnt, crb_idx, crb_pipes; struct resource_context *res_ctx = &context->res_ctx; struct pipe_ctx *pipe; const int max_usable_det = context->bw_ctx.dml.ip.config_return_buffer_size_in_kbytes - DCN3_15_MIN_COMPBUF_SIZE_KB; + int remaining_det_segs = max_usable_det / DCN3_15_CRB_SEGMENT_SIZE_KB; + bool pixel_rate_crb = allow_pixel_rate_crb(dc, context); DC_FP_START(); dcn31x_populate_dml_pipes_from_context(dc, context, pipes, fast_validate); DC_FP_END(); - for (i = 0, pipe_cnt = 0; i < dc->res_pool->pipe_count; i++) { + for (i = 0, pipe_cnt = 0, crb_pipes = 0; i < dc->res_pool->pipe_count; i++) { struct dc_crtc_timing *timing; if (!res_ctx->pipe_ctx[i].stream) @@ -1671,6 +1714,23 @@ static int dcn315_populate_dml_pipes_from_context( pipes[pipe_cnt].dout.dsc_input_bpc = 0; DC_FP_START(); dcn31_zero_pipe_dcc_fraction(pipes, pipe_cnt); + if (pixel_rate_crb && !pipe->top_pipe && !pipe->prev_odm_pipe) { + int bpp = source_format_to_bpp(pipes[pipe_cnt].pipe.src.source_format); + /* Ceil to crb segment size */ + int approx_det_segs_required_for_pstate = dcn_get_approx_det_segs_required_for_pstate( + &context->bw_ctx.dml.soc, timing->pix_clk_100hz, bpp, DCN3_15_CRB_SEGMENT_SIZE_KB); + if (approx_det_segs_required_for_pstate <= 2 * DCN3_15_MAX_DET_SEGS) { + bool split_required = approx_det_segs_required_for_pstate > DCN3_15_MAX_DET_SEGS; + split_required = split_required || timing->pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc); + split_required = split_required || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120); + if (split_required) + approx_det_segs_required_for_pstate += approx_det_segs_required_for_pstate % 2; + pipes[pipe_cnt].pipe.src.det_size_override = approx_det_segs_required_for_pstate; + remaining_det_segs -= approx_det_segs_required_for_pstate; + } else + remaining_det_segs = -1; + crb_pipes++; + } DC_FP_END(); if (pipes[pipe_cnt].dout.dsc_enable) { @@ -1689,16 +1749,54 @@ static int dcn315_populate_dml_pipes_from_context( break; } } - pipe_cnt++; } + /* Spread remaining unreserved crb evenly among all pipes*/ + if (pixel_rate_crb) { + for (i = 0, pipe_cnt = 0, crb_idx = 0; i < dc->res_pool->pipe_count; i++) { + pipe = &res_ctx->pipe_ctx[i]; + if (!pipe->stream) + continue; + + /* Do not use asymetric crb if not enough for pstate support */ + if (remaining_det_segs < 0) { + pipes[pipe_cnt].pipe.src.det_size_override = 0; + continue; + } + + if (!pipe->top_pipe && !pipe->prev_odm_pipe) { + bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc) + || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120); + + if (remaining_det_segs > MIN_RESERVED_DET_SEGS) + pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes + + (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 
1 : 0); + if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) { + /* Clamp to 2 pipe split max det segments */ + remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override - 2 * (DCN3_15_MAX_DET_SEGS); + pipes[pipe_cnt].pipe.src.det_size_override = 2 * DCN3_15_MAX_DET_SEGS; + } + if (pipes[pipe_cnt].pipe.src.det_size_override > DCN3_15_MAX_DET_SEGS || split_required) { + /* If we are splitting we must have an even number of segments */ + remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override % 2; + pipes[pipe_cnt].pipe.src.det_size_override -= pipes[pipe_cnt].pipe.src.det_size_override % 2; + } + /* Convert segments into size for DML use */ + pipes[pipe_cnt].pipe.src.det_size_override *= DCN3_15_CRB_SEGMENT_SIZE_KB; + + crb_idx++; + } + pipe_cnt++; + } + } + if (pipe_cnt) context->bw_ctx.dml.ip.det_buffer_size_kbytes = (max_usable_det / DCN3_15_CRB_SEGMENT_SIZE_KB / pipe_cnt) * DCN3_15_CRB_SEGMENT_SIZE_KB; if (context->bw_ctx.dml.ip.det_buffer_size_kbytes > DCN3_15_MAX_DET_SIZE) context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_15_MAX_DET_SIZE; - ASSERT(context->bw_ctx.dml.ip.det_buffer_size_kbytes >= DCN3_15_DEFAULT_DET_SIZE); + dc->config.enable_4to1MPC = false; if (pipe_cnt == 1 && pipe->plane_state && !dc->debug.disable_z9_mpc) { if (is_dual_plane(pipe->plane_state->format) diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c index f1c1a4b5fcac3..7661f8946aa31 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c @@ -948,10 +948,10 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc { int plane_count; int i; - unsigned int optimized_min_dst_y_next_start_us; + unsigned int min_dst_y_next_start_us; plane_count = 0; - optimized_min_dst_y_next_start_us = 0; + min_dst_y_next_start_us = 0; for (i = 0; i < dc->res_pool->pipe_count; i++) { if (context->res_ctx.pipe_ctx[i].plane_state) plane_count++; @@ -973,19 +973,18 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; struct dc_stream_status *stream_status = &context->stream_status[0]; + struct dc_stream_state *current_stream = context->streams[0]; int minmum_z8_residency = dc->debug.minimum_z8_residency_time > 0 ? 
dc->debug.minimum_z8_residency_time : 1000; bool allow_z8 = context->bw_ctx.dml.vba.StutterPeriod > (double)minmum_z8_residency; bool is_pwrseq0 = link->link_index == 0; + bool isFreesyncVideo; - if (dc_extended_blank_supported(dc)) { - for (i = 0; i < dc->res_pool->pipe_count; i++) { - if (context->res_ctx.pipe_ctx[i].stream == context->streams[0] - && context->res_ctx.pipe_ctx[i].stream->adjust.v_total_min == context->res_ctx.pipe_ctx[i].stream->adjust.v_total_max - && context->res_ctx.pipe_ctx[i].stream->adjust.v_total_min > context->res_ctx.pipe_ctx[i].stream->timing.v_total) { - optimized_min_dst_y_next_start_us = - context->res_ctx.pipe_ctx[i].dlg_regs.optimized_min_dst_y_next_start_us; - break; - } + isFreesyncVideo = current_stream->adjust.v_total_min == current_stream->adjust.v_total_max; + isFreesyncVideo = isFreesyncVideo && current_stream->timing.v_total < current_stream->adjust.v_total_min; + for (i = 0; i < dc->res_pool->pipe_count; i++) { + if (context->res_ctx.pipe_ctx[i].stream == current_stream && isFreesyncVideo) { + min_dst_y_next_start_us = context->res_ctx.pipe_ctx[i].dlg_regs.min_dst_y_next_start_us; + break; } } @@ -993,7 +992,7 @@ static enum dcn_zstate_support_state decide_zstate_support(struct dc *dc, struc if (stream_status->plane_count > 1) return DCN_ZSTATE_SUPPORT_DISALLOW; - if (is_pwrseq0 && (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || optimized_min_dst_y_next_start_us > 5000)) + if (is_pwrseq0 && (context->bw_ctx.dml.vba.StutterPeriod > 5000.0 || min_dst_y_next_start_us > 5000)) return DCN_ZSTATE_SUPPORT_ALLOW; else if (is_pwrseq0 && link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr) return allow_z8 ? DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY : DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c index 59836570603ac..19d034341e640 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c @@ -483,7 +483,7 @@ void dcn31_calculate_wm_and_dlg_fp( int pipe_cnt, int vlevel) { - int i, pipe_idx, active_hubp_count = 0; + int i, pipe_idx, total_det = 0, active_hubp_count = 0; double dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb]; dc_assert_fp_enabled(); @@ -563,6 +563,18 @@ void dcn31_calculate_wm_and_dlg_fp( if (context->res_ctx.pipe_ctx[i].stream) context->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz = 0; } + for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) { + if (!context->res_ctx.pipe_ctx[i].stream) + continue; + + context->res_ctx.pipe_ctx[i].det_buffer_size_kb = + get_det_buffer_size_kbytes(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx); + if (context->res_ctx.pipe_ctx[i].det_buffer_size_kb > 384) + context->res_ctx.pipe_ctx[i].det_buffer_size_kb /= 2; + total_det += context->res_ctx.pipe_ctx[i].det_buffer_size_kb; + pipe_idx++; + } + context->bw_ctx.bw.dcn.compbuf_size_kb = context->bw_ctx.dml.ip.config_return_buffer_size_in_kbytes - total_det; } void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params) @@ -815,3 +827,14 @@ int dcn_get_max_non_odm_pix_rate_100hz(struct _vcs_dpi_soc_bounding_box_st *soc) { return soc->clock_limits[0].dispclk_mhz * 10000.0 / (1.0 + soc->dcn_downspread_percent / 100.0); } + +int dcn_get_approx_det_segs_required_for_pstate( + struct _vcs_dpi_soc_bounding_box_st *soc, + int pix_clk_100hz, int bpp, int seg_size_kb) +{ + /* Roughly calculate 
required crb to hide latency. In practice there is slightly + * more buffer available for latency hiding + */ + return (int)(soc->dram_clock_change_latency_us * pix_clk_100hz * bpp + / 10240000 + seg_size_kb - 1) / seg_size_kb; +} diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h index 687d3522cc33e..8f9c8faed2605 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h @@ -47,6 +47,9 @@ void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params); void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params); int dcn_get_max_non_odm_pix_rate_100hz(struct _vcs_dpi_soc_bounding_box_st *soc); +int dcn_get_approx_det_segs_required_for_pstate( + struct _vcs_dpi_soc_bounding_box_st *soc, + int pix_clk_100hz, int bpp, int seg_size_kb); int dcn31x_populate_dml_pipes_from_context(struct dc *dc, struct dc_state *context, diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c index bd674dc30df33..a0f44eef7763f 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c @@ -532,7 +532,8 @@ static void CalculateStutterEfficiency( static void CalculateSwathAndDETConfiguration( bool ForceSingleDPP, int NumberOfActivePlanes, - unsigned int DETBufferSizeInKByte, + bool DETSharedByAllDPP, + unsigned int DETBufferSizeInKByte[], double MaximumSwathWidthLuma[], double MaximumSwathWidthChroma[], enum scan_direction_class SourceScan[], @@ -3118,7 +3119,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman v->SurfaceWidthC[k], v->SurfaceHeightY[k], v->SurfaceHeightC[k], - v->DETBufferSizeInKByte[0] * 1024, + v->DETBufferSizeInKByte[k] * 1024, v->BlockHeight256BytesY[k], v->BlockHeight256BytesC[k], v->SurfaceTiling[k], @@ -3313,7 +3314,8 @@ static void DisplayPipeConfiguration(struct display_mode_lib *mode_lib) CalculateSwathAndDETConfiguration( false, v->NumberOfActivePlanes, - v->DETBufferSizeInKByte[0], + mode_lib->project == DML_PROJECT_DCN315 && v->DETSizeOverride[0], + v->DETBufferSizeInKByte, dummy1, dummy2, v->SourceScan, @@ -3779,14 +3781,16 @@ static noinline void CalculatePrefetchSchedulePerPlane( &v->VReadyOffsetPix[k]); } -static void PatchDETBufferSizeInKByte(unsigned int NumberOfActivePlanes, int NoOfDPPThisState[], unsigned int config_return_buffer_size_in_kbytes, unsigned int *DETBufferSizeInKByte) +static void PatchDETBufferSizeInKByte(unsigned int NumberOfActivePlanes, int NoOfDPPThisState[], unsigned int config_return_buffer_size_in_kbytes, unsigned int DETBufferSizeInKByte[]) { int i, total_pipes = 0; for (i = 0; i < NumberOfActivePlanes; i++) total_pipes += NoOfDPPThisState[i]; - *DETBufferSizeInKByte = ((config_return_buffer_size_in_kbytes - DCN3_15_MIN_COMPBUF_SIZE_KB) / 64 / total_pipes) * 64; - if (*DETBufferSizeInKByte > DCN3_15_MAX_DET_SIZE) - *DETBufferSizeInKByte = DCN3_15_MAX_DET_SIZE; + DETBufferSizeInKByte[0] = ((config_return_buffer_size_in_kbytes - DCN3_15_MIN_COMPBUF_SIZE_KB) / 64 / total_pipes) * 64; + if (DETBufferSizeInKByte[0] > DCN3_15_MAX_DET_SIZE) + DETBufferSizeInKByte[0] = DCN3_15_MAX_DET_SIZE; + for (i = 1; i < NumberOfActivePlanes; i++) + DETBufferSizeInKByte[i] = DETBufferSizeInKByte[0]; } @@ -4026,7 +4030,8 
@@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l CalculateSwathAndDETConfiguration( true, v->NumberOfActivePlanes, - v->DETBufferSizeInKByte[0], + mode_lib->project == DML_PROJECT_DCN315 && v->DETSizeOverride[0], + v->DETBufferSizeInKByte, v->MaximumSwathWidthLuma, v->MaximumSwathWidthChroma, v->SourceScan, @@ -4166,6 +4171,10 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l || (v->PlaneRequiredDISPCLK > v->MaxDispclkRoundedDownToDFSGranularity)) { v->DISPCLK_DPPCLK_Support[i][j] = false; } + if (mode_lib->project == DML_PROJECT_DCN315 && v->DETSizeOverride[k] > DCN3_15_MAX_DET_SIZE && v->NoOfDPP[i][j][k] < 2) { + v->MPCCombine[i][j][k] = true; + v->NoOfDPP[i][j][k] = 2; + } } v->TotalNumberOfActiveDPP[i][j] = 0; v->TotalNumberOfSingleDPPPlanes[i][j] = 0; @@ -4642,12 +4651,13 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l v->ODMCombineEnableThisState[k] = v->ODMCombineEnablePerState[i][k]; } - if (v->NumberOfActivePlanes > 1 && mode_lib->project == DML_PROJECT_DCN315) - PatchDETBufferSizeInKByte(v->NumberOfActivePlanes, v->NoOfDPPThisState, v->ip.config_return_buffer_size_in_kbytes, &v->DETBufferSizeInKByte[0]); + if (v->NumberOfActivePlanes > 1 && mode_lib->project == DML_PROJECT_DCN315 && !v->DETSizeOverride[0]) + PatchDETBufferSizeInKByte(v->NumberOfActivePlanes, v->NoOfDPPThisState, v->ip.config_return_buffer_size_in_kbytes, v->DETBufferSizeInKByte); CalculateSwathAndDETConfiguration( false, v->NumberOfActivePlanes, - v->DETBufferSizeInKByte[0], + mode_lib->project == DML_PROJECT_DCN315 && v->DETSizeOverride[0], + v->DETBufferSizeInKByte, v->MaximumSwathWidthLuma, v->MaximumSwathWidthChroma, v->SourceScan, @@ -6611,7 +6621,8 @@ static void CalculateStutterEfficiency( static void CalculateSwathAndDETConfiguration( bool ForceSingleDPP, int NumberOfActivePlanes, - unsigned int DETBufferSizeInKByte, + bool DETSharedByAllDPP, + unsigned int DETBufferSizeInKByteA[], double MaximumSwathWidthLuma[], double MaximumSwathWidthChroma[], enum scan_direction_class SourceScan[], @@ -6695,6 +6706,10 @@ static void CalculateSwathAndDETConfiguration( *ViewportSizeSupport = true; for (k = 0; k < NumberOfActivePlanes; ++k) { + unsigned int DETBufferSizeInKByte = DETBufferSizeInKByteA[k]; + + if (DETSharedByAllDPP && DPPPerPlane[k]) + DETBufferSizeInKByte /= DPPPerPlane[k]; if ((SourcePixelFormat[k] == dm_444_64 || SourcePixelFormat[k] == dm_444_32 || SourcePixelFormat[k] == dm_444_16 || SourcePixelFormat[k] == dm_mono_16 || SourcePixelFormat[k] == dm_mono_8 || SourcePixelFormat[k] == dm_rgbe)) { if (SurfaceTiling[k] == dm_sw_linear diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_rq_dlg_calc_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_rq_dlg_calc_31.c index 2244e4fb8c96d..fcde8f21b8be0 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_rq_dlg_calc_31.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_rq_dlg_calc_31.c @@ -987,8 +987,7 @@ static void dml_rq_dlg_get_dlg_params( dlg_vblank_start = interlaced ? 
(vblank_start / 2) : vblank_start; disp_dlg_regs->min_dst_y_next_start = (unsigned int) (((double) dlg_vblank_start) * dml_pow(2, 2)); - disp_dlg_regs->optimized_min_dst_y_next_start_us = 0; - disp_dlg_regs->optimized_min_dst_y_next_start = disp_dlg_regs->min_dst_y_next_start; + disp_dlg_regs->min_dst_y_next_start_us = 0; ASSERT(disp_dlg_regs->min_dst_y_next_start < (unsigned int)dml_pow(2, 18)); dml_print("DML_DLG: %s: min_ttu_vblank (us) = %3.2f\n", __func__, min_ttu_vblank); diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c index 9e54e3d0eb780..b878effa2129b 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c @@ -33,7 +33,7 @@ #include "dml/display_mode_vba.h" struct _vcs_dpi_ip_params_st dcn3_14_ip = { - .VBlankNomDefaultUS = 668, + .VBlankNomDefaultUS = 800, .gpuvm_enable = 1, .gpuvm_max_page_table_levels = 1, .hostvm_enable = 1, @@ -286,6 +286,7 @@ int dcn314_populate_dml_pipes_from_context_fpu(struct dc *dc, struct dc_state *c struct resource_context *res_ctx = &context->res_ctx; struct pipe_ctx *pipe; bool upscaled = false; + const unsigned int max_allowed_vblank_nom = 1023; dc_assert_fp_enabled(); @@ -299,9 +300,15 @@ int dcn314_populate_dml_pipes_from_context_fpu(struct dc *dc, struct dc_state *c pipe = &res_ctx->pipe_ctx[i]; timing = &pipe->stream->timing; - if (dc_extended_blank_supported(dc) && pipe->stream->adjust.v_total_max == pipe->stream->adjust.v_total_min - && pipe->stream->adjust.v_total_min > timing->v_total) + if (pipe->stream->adjust.v_total_min != 0) pipes[pipe_cnt].pipe.dest.vtotal = pipe->stream->adjust.v_total_min; + else + pipes[pipe_cnt].pipe.dest.vtotal = timing->v_total; + + pipes[pipe_cnt].pipe.dest.vblank_nom = timing->v_total - pipes[pipe_cnt].pipe.dest.vactive; + pipes[pipe_cnt].pipe.dest.vblank_nom = min(pipes[pipe_cnt].pipe.dest.vblank_nom, dcn3_14_ip.VBlankNomDefaultUS); + pipes[pipe_cnt].pipe.dest.vblank_nom = max(pipes[pipe_cnt].pipe.dest.vblank_nom, timing->v_sync_width); + pipes[pipe_cnt].pipe.dest.vblank_nom = min(pipes[pipe_cnt].pipe.dest.vblank_nom, max_allowed_vblank_nom); if (pipe->plane_state && (pipe->plane_state->src_rect.height < pipe->plane_state->dst_rect.height || @@ -323,8 +330,6 @@ int dcn314_populate_dml_pipes_from_context_fpu(struct dc *dc, struct dc_state *c pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_luma = 0; pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_chroma = 0; pipes[pipe_cnt].pipe.dest.vfront_porch = timing->v_front_porch; - pipes[pipe_cnt].pipe.dest.vblank_nom = - dcn3_14_ip.VBlankNomDefaultUS / (timing->h_total / (timing->pix_clk_100hz / 10000.0)); pipes[pipe_cnt].pipe.src.dcc_rate = 3; pipes[pipe_cnt].dout.dsc_input_bpc = 0; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_rq_dlg_calc_314.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_rq_dlg_calc_314.c index ea4eb66066c42..4f945458b2b7e 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_rq_dlg_calc_314.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_rq_dlg_calc_314.c @@ -1051,7 +1051,6 @@ static void dml_rq_dlg_get_dlg_params( float vba__refcyc_per_req_delivery_pre_l = get_refcyc_per_req_delivery_pre_l_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA float vba__refcyc_per_req_delivery_l = get_refcyc_per_req_delivery_l_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA - int blank_lines = 0; 
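/*
 * The dcn314_fpu.c hunk above replaces the old microsecond-based
 * vblank_nom computation with a clamp chain expressed in lines: start
 * from the real blanking interval, cap it at VBlankNomDefaultUS (raised
 * from 668 to 800 here; note the constant keeps its "US" suffix even
 * though it is now compared against a line count), floor it at the
 * vsync width, then re-cap at max_allowed_vblank_nom = 1023. Ordering
 * matters: the max() runs after the 800 cap, so an unusually tall sync
 * width can still raise the value, and the final min() bounds whatever
 * max() produced. Sketch with made-up timing numbers:
 *
 *	vblank_nom = v_total - v_active;            // e.g. 2250 - 2160 = 90
 *	vblank_nom = min(vblank_nom, 800);          // VBlankNomDefaultUS
 *	vblank_nom = max(vblank_nom, v_sync_width); // floor at sync width
 *	vblank_nom = min(vblank_nom, 1023);         // max_allowed_vblank_nom
 */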
memset(disp_dlg_regs, 0, sizeof(*disp_dlg_regs)); memset(disp_ttu_regs, 0, sizeof(*disp_ttu_regs)); @@ -1075,17 +1074,10 @@ static void dml_rq_dlg_get_dlg_params( min_ttu_vblank = get_min_ttu_vblank_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx); // From VBA dlg_vblank_start = interlaced ? (vblank_start / 2) : vblank_start; - disp_dlg_regs->optimized_min_dst_y_next_start = disp_dlg_regs->min_dst_y_next_start; - disp_dlg_regs->optimized_min_dst_y_next_start_us = 0; - disp_dlg_regs->min_dst_y_next_start = (unsigned int) (((double) dlg_vblank_start) * dml_pow(2, 2)); - blank_lines = (dst->vblank_end + dst->vtotal_min - dst->vblank_start - dst->vstartup_start - 1); - if (blank_lines < 0) - blank_lines = 0; - if (blank_lines != 0) { - disp_dlg_regs->optimized_min_dst_y_next_start = vba__min_dst_y_next_start; - disp_dlg_regs->optimized_min_dst_y_next_start_us = (disp_dlg_regs->optimized_min_dst_y_next_start * dst->hactive) / (unsigned int) dst->pixel_rate_mhz; - disp_dlg_regs->min_dst_y_next_start = disp_dlg_regs->optimized_min_dst_y_next_start; - } + disp_dlg_regs->min_dst_y_next_start_us = + (vba__min_dst_y_next_start * dst->hactive) / (unsigned int) dst->pixel_rate_mhz; + disp_dlg_regs->min_dst_y_next_start = vba__min_dst_y_next_start * dml_pow(2, 2); + ASSERT(disp_dlg_regs->min_dst_y_next_start < (unsigned int)dml_pow(2, 18)); dml_print("DML_DLG: %s: min_ttu_vblank (us) = %3.2f\n", __func__, min_ttu_vblank); diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h index 3c077164f3620..ff0246a9458fd 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h +++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h @@ -619,8 +619,7 @@ struct _vcs_dpi_display_dlg_regs_st { unsigned int refcyc_h_blank_end; unsigned int dlg_vblank_end; unsigned int min_dst_y_next_start; - unsigned int optimized_min_dst_y_next_start; - unsigned int optimized_min_dst_y_next_start_us; + unsigned int min_dst_y_next_start_us; unsigned int refcyc_per_htotal; unsigned int refcyc_x_after_scaler; unsigned int dst_y_after_scaler; diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c index f9653f511baa3..2f63ae954826c 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c +++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c @@ -571,6 +571,10 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib) mode_lib->vba.OutputLinkDPRate[mode_lib->vba.NumberOfActivePlanes] = dout->dp_rate; mode_lib->vba.ODMUse[mode_lib->vba.NumberOfActivePlanes] = dst->odm_combine_policy; mode_lib->vba.DETSizeOverride[mode_lib->vba.NumberOfActivePlanes] = src->det_size_override; + if (src->det_size_override) + mode_lib->vba.DETBufferSizeInKByte[mode_lib->vba.NumberOfActivePlanes] = src->det_size_override; + else + mode_lib->vba.DETBufferSizeInKByte[mode_lib->vba.NumberOfActivePlanes] = ip->det_buffer_size_kbytes; //TODO: Need to assign correct values to dp_multistream vars mode_lib->vba.OutputMultistreamEn[mode_lib->vba.NumberOfActiveSurfaces] = dout->dp_multistream_en; mode_lib->vba.OutputMultistreamId[mode_lib->vba.NumberOfActiveSurfaces] = dout->dp_multistream_id; @@ -785,6 +789,8 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib) mode_lib->vba.pipe_plane[k] = mode_lib->vba.NumberOfActivePlanes; mode_lib->vba.DPPPerPlane[mode_lib->vba.NumberOfActivePlanes]++; + if (src_k->det_size_override) + 
mode_lib->vba.DETBufferSizeInKByte[mode_lib->vba.NumberOfActivePlanes] = src_k->det_size_override; if (mode_lib->vba.SourceScan[mode_lib->vba.NumberOfActivePlanes] == dm_horz) { mode_lib->vba.ViewportWidth[mode_lib->vba.NumberOfActivePlanes] += diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c index 5731c4b61f9f0..15faaf645b145 100644 --- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c +++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c @@ -233,7 +233,7 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( link->dpcd_caps.lttpr_caps.phy_repeater_cnt); const uint8_t vendor_lttpr_write_data_intercept_en[4] = {0x1, 0x55, 0x63, 0x0}; const uint8_t vendor_lttpr_write_data_intercept_dis[4] = {0x1, 0x55, 0x63, 0x68}; - uint32_t pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa; + uint32_t pre_disable_intercept_delay_ms = 0; uint8_t vendor_lttpr_write_data_vs[4] = {0x1, 0x51, 0x63, 0x0}; uint8_t vendor_lttpr_write_data_pe[4] = {0x1, 0x52, 0x63, 0x0}; uint32_t vendor_lttpr_write_address = 0xF004F; @@ -244,6 +244,10 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( uint8_t toggle_rate; uint8_t rate; + if (link->local_sink) + pre_disable_intercept_delay_ms = + link->local_sink->edid_caps.panel_patch.delay_disable_aux_intercept_ms; + /* Only 8b/10b is supported */ ASSERT(link_dp_get_encoding_format(<_settings->link_settings) == DP_8b_10b_ENCODING); @@ -259,7 +263,7 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( /* Certain display and cable configuration require extra delay */ if (offset > 2) - pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa * 2; + pre_disable_intercept_delay_ms = pre_disable_intercept_delay_ms * 2; } /* Vendor specific: Reset lane settings */ @@ -380,7 +384,8 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( 0); /* Vendor specific: Disable intercept */ for (i = 0; i < max_vendor_dpcd_retries; i++) { - msleep(pre_disable_intercept_delay_ms); + if (pre_disable_intercept_delay_ms != 0) + msleep(pre_disable_intercept_delay_ms); dpcd_status = core_link_write_dpcd( link, vendor_lttpr_write_address, @@ -591,10 +596,9 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence( const uint8_t vendor_lttpr_write_data_adicora_eq1[4] = {0x1, 0x55, 0x63, 0x2E}; const uint8_t vendor_lttpr_write_data_adicora_eq2[4] = {0x1, 0x55, 0x63, 0x01}; const uint8_t vendor_lttpr_write_data_adicora_eq3[4] = {0x1, 0x55, 0x63, 0x68}; - uint32_t pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa; uint8_t vendor_lttpr_write_data_vs[4] = {0x1, 0x51, 0x63, 0x0}; uint8_t vendor_lttpr_write_data_pe[4] = {0x1, 0x52, 0x63, 0x0}; - + uint32_t pre_disable_intercept_delay_ms = 0; uint32_t vendor_lttpr_write_address = 0xF004F; enum link_training_result status = LINK_TRAINING_SUCCESS; uint8_t lane = 0; @@ -603,6 +607,10 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence( uint8_t toggle_rate; uint8_t rate; + if (link->local_sink) + pre_disable_intercept_delay_ms = + link->local_sink->edid_caps.panel_patch.delay_disable_aux_intercept_ms; + /* Only 8b/10b is supported */ ASSERT(link_dp_get_encoding_format(<_settings->link_settings) == DP_8b_10b_ENCODING); @@ -618,7 +626,7 @@ enum 
link_training_result dp_perform_fixed_vs_pe_training_sequence( /* Certain display and cable configuration require extra delay */ if (offset > 2) - pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa * 2; + pre_disable_intercept_delay_ms = pre_disable_intercept_delay_ms * 2; } /* Vendor specific: Reset lane settings */ @@ -739,7 +747,8 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence( 0); /* Vendor specific: Disable intercept */ for (i = 0; i < max_vendor_dpcd_retries; i++) { - msleep(pre_disable_intercept_delay_ms); + if (pre_disable_intercept_delay_ms != 0) + msleep(pre_disable_intercept_delay_ms); dpcd_status = core_link_write_dpcd( link, vendor_lttpr_write_address, diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h index 554ab48d4e647..9cad599b27094 100644 --- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h +++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h @@ -364,6 +364,8 @@ struct dmub_srv_hw_funcs { bool (*is_supported)(struct dmub_srv *dmub); + bool (*is_psrsu_supported)(struct dmub_srv *dmub); + bool (*is_hw_init)(struct dmub_srv *dmub); bool (*is_phy_init)(struct dmub_srv *dmub); diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h index 598fa1de54ce3..1c55d3b01f53e 100644 --- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h +++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h @@ -360,7 +360,7 @@ union dmub_fw_boot_status { uint32_t optimized_init_done : 1; /**< 1 if optimized init done */ uint32_t restore_required : 1; /**< 1 if driver should call restore */ uint32_t defer_load : 1; /**< 1 if VBIOS data is deferred programmed */ - uint32_t reserved : 1; + uint32_t fams_enabled : 1; /**< 1 if VBIOS data is deferred programmed */ uint32_t detection_required: 1; /**< if detection need to be triggered by driver */ uint32_t hw_power_init_done: 1; /**< 1 if hw power init is completed */ } bits; /**< status bits */ diff --git a/drivers/gpu/drm/amd/display/dmub/src/Makefile b/drivers/gpu/drm/amd/display/dmub/src/Makefile index 0589ad4778eea..caf095aca8f3f 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/Makefile +++ b/drivers/gpu/drm/amd/display/dmub/src/Makefile @@ -22,7 +22,7 @@ DMUB = dmub_srv.o dmub_srv_stat.o dmub_reg.o dmub_dcn20.o dmub_dcn21.o DMUB += dmub_dcn30.o dmub_dcn301.o dmub_dcn302.o dmub_dcn303.o -DMUB += dmub_dcn31.o dmub_dcn315.o dmub_dcn316.o +DMUB += dmub_dcn31.o dmub_dcn314.o dmub_dcn315.o dmub_dcn316.o DMUB += dmub_dcn32.o AMD_DAL_DMUB = $(addprefix $(AMDDALPATH)/dmub/src/,$(DMUB)) diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c index c90b9ee42e126..89d24fb7024e2 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c @@ -297,6 +297,11 @@ bool dmub_dcn31_is_supported(struct dmub_srv *dmub) return supported; } +bool dmub_dcn31_is_psrsu_supported(struct dmub_srv *dmub) +{ + return dmub->fw_version >= DMUB_FW_VERSION(4, 0, 59); +} + void dmub_dcn31_set_gpint(struct dmub_srv *dmub, union dmub_gpint_data_register reg) { diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h index f6db6f89d45dc..eb62410941473 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h @@ -219,6 +219,8 @@ bool dmub_dcn31_is_hw_init(struct dmub_srv *dmub); bool dmub_dcn31_is_supported(struct 
dmub_srv *dmub); +bool dmub_dcn31_is_psrsu_supported(struct dmub_srv *dmub); + void dmub_dcn31_set_gpint(struct dmub_srv *dmub, union dmub_gpint_data_register reg); diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.c new file mode 100644 index 0000000000000..f161aeb7e7c4a --- /dev/null +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.c @@ -0,0 +1,67 @@ +/* + * Copyright 2021 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#include "../dmub_srv.h" +#include "dmub_reg.h" +#include "dmub_dcn314.h" + +#include "dcn/dcn_3_1_4_offset.h" +#include "dcn/dcn_3_1_4_sh_mask.h" + +#define DCN_BASE__INST0_SEG0 0x00000012 +#define DCN_BASE__INST0_SEG1 0x000000C0 +#define DCN_BASE__INST0_SEG2 0x000034C0 +#define DCN_BASE__INST0_SEG3 0x00009000 +#define DCN_BASE__INST0_SEG4 0x02403C00 +#define DCN_BASE__INST0_SEG5 0 + +#define BASE_INNER(seg) DCN_BASE__INST0_SEG##seg +#define CTX dmub +#define REGS dmub->regs_dcn31 +#define REG_OFFSET_EXP(reg_name) (BASE(reg##reg_name##_BASE_IDX) + reg##reg_name) + +/* Registers. */ + +const struct dmub_srv_dcn31_regs dmub_srv_dcn314_regs = { +#define DMUB_SR(reg) REG_OFFSET_EXP(reg), + { + DMUB_DCN31_REGS() + DMCUB_INTERNAL_REGS() + }, +#undef DMUB_SR + +#define DMUB_SF(reg, field) FD_MASK(reg, field), + { DMUB_DCN31_FIELDS() }, +#undef DMUB_SF + +#define DMUB_SF(reg, field) FD_SHIFT(reg, field), + { DMUB_DCN31_FIELDS() }, +#undef DMUB_SF +}; + +bool dmub_dcn314_is_psrsu_supported(struct dmub_srv *dmub) +{ + return dmub->fw_version >= DMUB_FW_VERSION(8, 0, 16); +} diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.h new file mode 100644 index 0000000000000..f213bd82c9110 --- /dev/null +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn314.h @@ -0,0 +1,35 @@ +/* + * Copyright 2021 Advanced Micro Devices, Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#ifndef _DMUB_DCN314_H_ +#define _DMUB_DCN314_H_ + +#include "dmub_dcn31.h" + +extern const struct dmub_srv_dcn31_regs dmub_srv_dcn314_regs; + +bool dmub_dcn314_is_psrsu_supported(struct dmub_srv *dmub); + +#endif /* _DMUB_DCN314_H_ */ diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c index 92c18bfb98b3b..0dab22d794808 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c @@ -32,6 +32,7 @@ #include "dmub_dcn302.h" #include "dmub_dcn303.h" #include "dmub_dcn31.h" +#include "dmub_dcn314.h" #include "dmub_dcn315.h" #include "dmub_dcn316.h" #include "dmub_dcn32.h" @@ -226,12 +227,17 @@ static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic) case DMUB_ASIC_DCN314: case DMUB_ASIC_DCN315: case DMUB_ASIC_DCN316: - if (asic == DMUB_ASIC_DCN315) + if (asic == DMUB_ASIC_DCN314) { + dmub->regs_dcn31 = &dmub_srv_dcn314_regs; + funcs->is_psrsu_supported = dmub_dcn314_is_psrsu_supported; + } else if (asic == DMUB_ASIC_DCN315) { dmub->regs_dcn31 = &dmub_srv_dcn315_regs; - else if (asic == DMUB_ASIC_DCN316) + } else if (asic == DMUB_ASIC_DCN316) { dmub->regs_dcn31 = &dmub_srv_dcn316_regs; - else + } else { dmub->regs_dcn31 = &dmub_srv_dcn31_regs; + funcs->is_psrsu_supported = dmub_dcn31_is_psrsu_supported; + } funcs->reset = dmub_dcn31_reset; funcs->reset_release = dmub_dcn31_reset_release; funcs->backdoor_load = dmub_dcn31_backdoor_load; diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c index e22fc563b462f..0cda3b276f611 100644 --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c @@ -2081,89 +2081,36 @@ static int sienna_cichlid_display_disable_memory_clock_switch(struct smu_context return ret; } -static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu, - uint32_t *gen_speed_override, - uint32_t *lane_width_override) -{ - struct amdgpu_device *adev = smu->adev; - - *gen_speed_override = 0xff; - *lane_width_override = 0xff; - - switch (adev->pdev->device) { - case 0x73A0: - case 0x73A1: - case 0x73A2: - case 0x73A3: - case 0x73AB: - case 0x73AE: - /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */ - *lane_width_override = 6; - break; - case 0x73E0: - case 0x73E1: - case 0x73E3: - 
*lane_width_override = 4; - break; - case 0x7420: - case 0x7421: - case 0x7422: - case 0x7423: - case 0x7424: - *lane_width_override = 3; - break; - default: - break; - } -} - -#define MAX(a, b) ((a) > (b) ? (a) : (b)) - static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu, uint32_t pcie_gen_cap, uint32_t pcie_width_cap) { struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table; - uint32_t gen_speed_override, lane_width_override; - uint8_t *table_member1, *table_member2; - uint32_t min_gen_speed, max_gen_speed; - uint32_t min_lane_width, max_lane_width; - uint32_t smu_pcie_arg; + u32 smu_pcie_arg; int ret, i; - GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1); - GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2); - - sienna_cichlid_get_override_pcie_settings(smu, - &gen_speed_override, - &lane_width_override); + /* PCIE gen speed and lane width override */ + if (!amdgpu_device_pcie_dynamic_switching_supported()) { + if (pcie_table->pcie_gen[NUM_LINK_LEVELS - 1] < pcie_gen_cap) + pcie_gen_cap = pcie_table->pcie_gen[NUM_LINK_LEVELS - 1]; - /* PCIE gen speed override */ - if (gen_speed_override != 0xff) { - min_gen_speed = MIN(pcie_gen_cap, gen_speed_override); - max_gen_speed = MIN(pcie_gen_cap, gen_speed_override); - } else { - min_gen_speed = MAX(0, table_member1[0]); - max_gen_speed = MIN(pcie_gen_cap, table_member1[1]); - min_gen_speed = min_gen_speed > max_gen_speed ? - max_gen_speed : min_gen_speed; - } - pcie_table->pcie_gen[0] = min_gen_speed; - pcie_table->pcie_gen[1] = max_gen_speed; + if (pcie_table->pcie_lane[NUM_LINK_LEVELS - 1] < pcie_width_cap) + pcie_width_cap = pcie_table->pcie_lane[NUM_LINK_LEVELS - 1]; - /* PCIE lane width override */ - if (lane_width_override != 0xff) { - min_lane_width = MIN(pcie_width_cap, lane_width_override); - max_lane_width = MIN(pcie_width_cap, lane_width_override); + /* Force all levels to use the same settings */ + for (i = 0; i < NUM_LINK_LEVELS; i++) { + pcie_table->pcie_gen[i] = pcie_gen_cap; + pcie_table->pcie_lane[i] = pcie_width_cap; + } } else { - min_lane_width = MAX(1, table_member2[0]); - max_lane_width = MIN(pcie_width_cap, table_member2[1]); - min_lane_width = min_lane_width > max_lane_width ? - max_lane_width : min_lane_width; + for (i = 0; i < NUM_LINK_LEVELS; i++) { + if (pcie_table->pcie_gen[i] > pcie_gen_cap) + pcie_table->pcie_gen[i] = pcie_gen_cap; + if (pcie_table->pcie_lane[i] > pcie_width_cap) + pcie_table->pcie_lane[i] = pcie_width_cap; + } } - pcie_table->pcie_lane[0] = min_lane_width; - pcie_table->pcie_lane[1] = max_lane_width; for (i = 0; i < NUM_LINK_LEVELS; i++) { smu_pcie_arg = (i << 16 | diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c index 7acf731a69ccf..79e9230fc7960 100644 --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c @@ -2454,25 +2454,6 @@ int smu_v13_0_mode1_reset(struct smu_context *smu) return ret; } -/* - * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic - * speed switching. Until we have confirmation from Intel that a specific host - * supports it, it's safer that we keep it disabled for all. 
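/*
 * The sienna_cichlid_update_pcie_parameters() rewrite above drops the
 * per-device-ID override table in favor of a capability check: when the
 * host cannot do dynamic PCIe speed switching, every DPM link level is
 * forced to a single gen/width (the lower of the platform cap and the
 * table's top level); otherwise each level is merely clamped to the
 * caps. A rough sketch of that shape, with generic names standing in
 * for the pptable fields:
 *
 *	if (!dynamic_switching_supported) {
 *		gen   = min(cap_gen,   table_gen[last]);
 *		width = min(cap_width, table_width[last]);
 *		for (i = 0; i < levels; i++) {
 *			table_gen[i]   = gen;
 *			table_width[i] = width;
 *		}
 *	} else {
 *		for (i = 0; i < levels; i++) {
 *			table_gen[i]   = min(table_gen[i],   cap_gen);
 *			table_width[i] = min(table_width[i], cap_width);
 *		}
 *	}
 */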
- * - * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/ - * https://gitlab.freedesktop.org/drm/amd/-/issues/2663 - */ -static bool smu_v13_0_is_pcie_dynamic_switching_supported(void) -{ -#if IS_ENABLED(CONFIG_X86) - struct cpuinfo_x86 *c = &cpu_data(0); - - if (c->x86_vendor == X86_VENDOR_INTEL) - return false; -#endif - return true; -} - int smu_v13_0_update_pcie_parameters(struct smu_context *smu, uint32_t pcie_gen_cap, uint32_t pcie_width_cap) @@ -2484,7 +2465,7 @@ int smu_v13_0_update_pcie_parameters(struct smu_context *smu, uint32_t smu_pcie_arg; int ret, i; - if (!smu_v13_0_is_pcie_dynamic_switching_supported()) { + if (!amdgpu_device_pcie_dynamic_switching_supported()) { if (pcie_table->pcie_gen[num_of_levels - 1] < pcie_gen_cap) pcie_gen_cap = pcie_table->pcie_gen[num_of_levels - 1]; diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c index 0c2be83605258..e592c5da70cee 100644 --- a/drivers/gpu/drm/drm_syncobj.c +++ b/drivers/gpu/drm/drm_syncobj.c @@ -353,10 +353,10 @@ EXPORT_SYMBOL(drm_syncobj_replace_fence); */ static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj) { - struct dma_fence *fence = dma_fence_allocate_private_stub(); + struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get()); - if (IS_ERR(fence)) - return PTR_ERR(fence); + if (!fence) + return -ENOMEM; drm_syncobj_replace_fence(syncobj, fence); dma_fence_put(fence); diff --git a/drivers/gpu/drm/i915/display/intel_dpt.c b/drivers/gpu/drm/i915/display/intel_dpt.c index b8027392144de..25b9a0ba29ebc 100644 --- a/drivers/gpu/drm/i915/display/intel_dpt.c +++ b/drivers/gpu/drm/i915/display/intel_dpt.c @@ -166,6 +166,8 @@ struct i915_vma *intel_dpt_pin(struct i915_address_space *vm) i915_vma_get(vma); } + dpt->obj->mm.dirty = true; + atomic_dec(&i915->gpu_error.pending_fb_pin); intel_runtime_pm_put(&i915->runtime_pm, wakeref); @@ -261,7 +263,7 @@ intel_dpt_create(struct intel_framebuffer *fb) dpt_obj = i915_gem_object_create_stolen(i915, size); if (IS_ERR(dpt_obj) && !HAS_LMEM(i915)) { drm_dbg_kms(&i915->drm, "Allocating dpt from smem\n"); - dpt_obj = i915_gem_object_create_internal(i915, size); + dpt_obj = i915_gem_object_create_shmem(i915, size); } if (IS_ERR(dpt_obj)) return ERR_CAST(dpt_obj); diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 99f39a5feca15..e86e75971ec60 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -1190,8 +1190,10 @@ static int igt_write_huge(struct drm_i915_private *i915, * times in succession a possibility by enlarging the permutation array. 
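 *
 * A note on the error path just below: on allocation failure the code
 * now sets err and unwinds through "goto out" instead of returning
 * -ENOMEM directly, so cleanup shared with the success path still runs
 * and whatever the function set up before this point is released. This
 * is the usual kernel single-exit idiom, roughly:
 *
 *	order = i915_random_order(count * count, &prng);
 *	if (!order) {
 *		err = -ENOMEM;
 *		goto out;
 *	}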
*/ order = i915_random_order(count * count, &prng); - if (!order) - return -ENOMEM; + if (!order) { + err = -ENOMEM; + goto out; + } max_page_size = rounddown_pow_of_two(obj->mm.page_sizes.sg); max = div_u64(max - size, max_page_size); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index a99310b687932..bbb1bf33f98ef 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -89,7 +89,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit * since we've already mapped it once in * submit_reloc() */ - if (WARN_ON(!ptr)) + if (WARN_ON(IS_ERR_OR_NULL(ptr))) return; for (i = 0; i < dwords; i++) { diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h index 790f55e245332..e788ed72eb0d3 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h @@ -206,7 +206,7 @@ static const struct a6xx_shader_block { SHADER(A6XX_SP_LB_3_DATA, 0x800), SHADER(A6XX_SP_LB_4_DATA, 0x800), SHADER(A6XX_SP_LB_5_DATA, 0x200), - SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x2000), + SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x800), SHADER(A6XX_SP_CB_LEGACY_DATA, 0x280), SHADER(A6XX_SP_UAV_DATA, 0x80), SHADER(A6XX_SP_INST_TAG, 0x80), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h index e3795995e1454..29bb8ee2bc266 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h @@ -14,19 +14,6 @@ #define DPU_PERF_DEFAULT_MAX_CORE_CLK_RATE 412500000 -/** - * enum dpu_core_perf_data_bus_id - data bus identifier - * @DPU_CORE_PERF_DATA_BUS_ID_MNOC: DPU/MNOC data bus - * @DPU_CORE_PERF_DATA_BUS_ID_LLCC: MNOC/LLCC data bus - * @DPU_CORE_PERF_DATA_BUS_ID_EBI: LLCC/EBI data bus - */ -enum dpu_core_perf_data_bus_id { - DPU_CORE_PERF_DATA_BUS_ID_MNOC, - DPU_CORE_PERF_DATA_BUS_ID_LLCC, - DPU_CORE_PERF_DATA_BUS_ID_EBI, - DPU_CORE_PERF_DATA_BUS_ID_MAX, -}; - /** * struct dpu_core_perf_params - definition of performance parameters * @max_per_pipe_ib: maximum instantaneous bandwidth request diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c index f6270b7a0b140..5afbc16ec5bbb 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c @@ -51,7 +51,7 @@ static const u32 fetch_tbl[SSPP_MAX] = {CTL_INVALID_BIT, 16, 17, 18, 19, CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, 0, - 1, 2, 3, CTL_INVALID_BIT, CTL_INVALID_BIT}; + 1, 2, 3, 4, 5}; static const struct dpu_ctl_cfg *_ctl_offset(enum dpu_ctl ctl, const struct dpu_mdss_cfg *m, @@ -209,6 +209,12 @@ static void dpu_hw_ctl_update_pending_flush_sspp(struct dpu_hw_ctl *ctx, case SSPP_DMA3: ctx->pending_flush_mask |= BIT(25); break; + case SSPP_DMA4: + ctx->pending_flush_mask |= BIT(13); + break; + case SSPP_DMA5: + ctx->pending_flush_mask |= BIT(14); + break; case SSPP_CURSOR0: ctx->pending_flush_mask |= BIT(22); break; diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c index 3ce45b023e637..31deda1c664ad 100644 --- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c +++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c @@ -1087,8 +1087,6 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_8953_cfgs = { const struct msm_dsi_phy_cfg dsi_phy_14nm_2290_cfgs = { .has_phy_lane = true, - .regulator_data = dsi_phy_14nm_17mA_regulators, - .num_regulators = 
ARRAY_SIZE(dsi_phy_14nm_17mA_regulators), .ops = { .enable = dsi_14nm_phy_enable, .disable = dsi_14nm_phy_disable, diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c index 96599ec3eb783..1a5d4f1c8b422 100644 --- a/drivers/gpu/drm/msm/msm_fence.c +++ b/drivers/gpu/drm/msm/msm_fence.c @@ -191,6 +191,12 @@ msm_fence_init(struct dma_fence *fence, struct msm_fence_context *fctx) f->fctx = fctx; + /* + * Until this point, the fence was just some pre-allocated memory, + * no-one should have taken a reference to it yet. + */ + WARN_ON(kref_read(&fence->refcount)); + dma_fence_init(&f->base, &msm_fence_ops, &fctx->spinlock, fctx->context, ++fctx->last_fence); } diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 9f5933c75e3df..1bd78041b4d0d 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -86,7 +86,19 @@ void __msm_gem_submit_destroy(struct kref *kref) } dma_fence_put(submit->user_fence); - dma_fence_put(submit->hw_fence); + + /* + * If the submit is freed before msm_job_run(), then hw_fence is + * just some pre-allocated memory, not a reference counted fence. + * Once the job runs and the hw_fence is initialized, it will + * have a refcount of at least one, since the submit holds a ref + * to the hw_fence. + */ + if (kref_read(&submit->hw_fence->refcount) == 0) { + kfree(submit->hw_fence); + } else { + dma_fence_put(submit->hw_fence); + } put_pid(submit->pid); msm_submitqueue_put(submit->queue); @@ -890,7 +902,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, * after the job is armed */ if ((args->flags & MSM_SUBMIT_FENCE_SN_IN) && - idr_find(&queue->fence_idr, args->fence)) { + (!args->fence || idr_find(&queue->fence_idr, args->fence))) { spin_unlock(&queue->idr_lock); idr_preload_end(); ret = -EINVAL; diff --git a/drivers/gpu/drm/msm/msm_mdss.c b/drivers/gpu/drm/msm/msm_mdss.c index e8c93731aaa18..4ae6fac20e48c 100644 --- a/drivers/gpu/drm/msm/msm_mdss.c +++ b/drivers/gpu/drm/msm/msm_mdss.c @@ -189,6 +189,7 @@ static int _msm_mdss_irq_domain_add(struct msm_mdss *msm_mdss) #define UBWC_2_0 0x20000000 #define UBWC_3_0 0x30000000 #define UBWC_4_0 0x40000000 +#define UBWC_4_3 0x40030000 static void msm_mdss_setup_ubwc_dec_20(struct msm_mdss *msm_mdss) { @@ -227,7 +228,10 @@ static void msm_mdss_setup_ubwc_dec_40(struct msm_mdss *msm_mdss) writel_relaxed(1, msm_mdss->mmio + UBWC_CTRL_2); writel_relaxed(0, msm_mdss->mmio + UBWC_PREDICTION_MODE); } else { - writel_relaxed(2, msm_mdss->mmio + UBWC_CTRL_2); + if (data->ubwc_dec_version == UBWC_4_3) + writel_relaxed(3, msm_mdss->mmio + UBWC_CTRL_2); + else + writel_relaxed(2, msm_mdss->mmio + UBWC_CTRL_2); writel_relaxed(1, msm_mdss->mmio + UBWC_PREDICTION_MODE); } } @@ -271,6 +275,7 @@ static int msm_mdss_enable(struct msm_mdss *msm_mdss) msm_mdss_setup_ubwc_dec_30(msm_mdss); break; case UBWC_4_0: + case UBWC_4_3: msm_mdss_setup_ubwc_dec_40(msm_mdss); break; default: @@ -561,6 +566,16 @@ static const struct msm_mdss_data sm8250_data = { .macrotile_mode = 1, }; +static const struct msm_mdss_data sm8550_data = { + .ubwc_version = UBWC_4_0, + .ubwc_dec_version = UBWC_4_3, + .ubwc_swizzle = 6, + .ubwc_static = 1, + /* TODO: highest_bank_bit = 2 for LP_DDR4 */ + .highest_bank_bit = 3, + .macrotile_mode = 1, +}; + static const struct of_device_id mdss_dt_match[] = { { .compatible = "qcom,mdss" }, { .compatible = "qcom,msm8998-mdss" }, @@ -575,7 +590,7 @@ static const struct of_device_id mdss_dt_match[] = { { .compatible = 
"qcom,sm8250-mdss", .data = &sm8250_data }, { .compatible = "qcom,sm8350-mdss", .data = &sm8250_data }, { .compatible = "qcom,sm8450-mdss", .data = &sm8250_data }, - { .compatible = "qcom,sm8550-mdss", .data = &sm8250_data }, + { .compatible = "qcom,sm8550-mdss", .data = &sm8550_data }, {} }; MODULE_DEVICE_TABLE(of, mdss_dt_match); diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 1a1cfd675cc46..7139a522b2f3b 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -517,6 +517,12 @@ static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo, { bool ret = false; + if (bo->pin_count) { + *locked = false; + *busy = false; + return false; + } + if (bo->base.resv == ctx->resv) { dma_resv_assert_held(bo->base.resv); if (ctx->allow_res_evict) diff --git a/drivers/hwmon/aquacomputer_d5next.c b/drivers/hwmon/aquacomputer_d5next.c index a4fcd4ebf76c2..c2b99fd4f436c 100644 --- a/drivers/hwmon/aquacomputer_d5next.c +++ b/drivers/hwmon/aquacomputer_d5next.c @@ -969,7 +969,7 @@ static int aqc_read(struct device *dev, enum hwmon_sensor_types type, u32 attr, if (ret < 0) return ret; - *val = aqc_percent_to_pwm(ret); + *val = aqc_percent_to_pwm(*val); break; } break; diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c index 7b177b9fbb097..a267b11731a8a 100644 --- a/drivers/hwmon/k10temp.c +++ b/drivers/hwmon/k10temp.c @@ -77,6 +77,13 @@ static DEFINE_MUTEX(nb_smu_ind_mutex); #define ZEN_CUR_TEMP_RANGE_SEL_MASK BIT(19) #define ZEN_CUR_TEMP_TJ_SEL_MASK GENMASK(17, 16) +/* + * AMD's Industrial processor 3255 supports temperature from -40 deg to 105 deg Celsius. + * Use the model name to identify 3255 CPUs and set a flag to display negative temperature. + * Do not round off to zero for negative Tctl or Tdie values if the flag is set + */ +#define AMD_I3255_STR "3255" + struct k10temp_data { struct pci_dev *pdev; void (*read_htcreg)(struct pci_dev *pdev, u32 *regval); @@ -86,6 +93,7 @@ struct k10temp_data { u32 show_temp; bool is_zen; u32 ccd_offset; + bool disp_negative; }; #define TCTL_BIT 0 @@ -204,12 +212,12 @@ static int k10temp_read_temp(struct device *dev, u32 attr, int channel, switch (channel) { case 0: /* Tctl */ *val = get_raw_temp(data); - if (*val < 0) + if (*val < 0 && !data->disp_negative) *val = 0; break; case 1: /* Tdie */ *val = get_raw_temp(data) - data->temp_offset; - if (*val < 0) + if (*val < 0 && !data->disp_negative) *val = 0; break; case 2 ... 
13: /* Tccd{1-12} */ @@ -405,6 +413,11 @@ static int k10temp_probe(struct pci_dev *pdev, const struct pci_device_id *id) data->pdev = pdev; data->show_temp |= BIT(TCTL_BIT); /* Always show Tctl */ + if (boot_cpu_data.x86 == 0x17 && + strstr(boot_cpu_data.x86_model_id, AMD_I3255_STR)) { + data->disp_negative = true; + } + if (boot_cpu_data.x86 == 0x15 && ((boot_cpu_data.x86_model & 0xf0) == 0x60 || (boot_cpu_data.x86_model & 0xf0) == 0x70)) { diff --git a/drivers/hwmon/nct7802.c b/drivers/hwmon/nct7802.c index a175f8283695e..e64c12d90a042 100644 --- a/drivers/hwmon/nct7802.c +++ b/drivers/hwmon/nct7802.c @@ -725,7 +725,7 @@ static umode_t nct7802_temp_is_visible(struct kobject *kobj, if (index >= 38 && index < 46 && !(reg & 0x01)) /* PECI 0 */ return 0; - if (index >= 0x46 && (!(reg & 0x02))) /* PECI 1 */ + if (index >= 46 && !(reg & 0x02)) /* PECI 1 */ return 0; return attr->mode; diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c index 9d14954da94fb..8e54be26c88f3 100644 --- a/drivers/hwmon/pmbus/pmbus_core.c +++ b/drivers/hwmon/pmbus/pmbus_core.c @@ -2745,9 +2745,8 @@ static const struct pmbus_status_category __maybe_unused pmbus_status_flag_map[] }, }; -static int _pmbus_is_enabled(struct device *dev, u8 page) +static int _pmbus_is_enabled(struct i2c_client *client, u8 page) { - struct i2c_client *client = to_i2c_client(dev->parent); int ret; ret = _pmbus_read_byte_data(client, page, PMBUS_OPERATION); @@ -2758,17 +2757,16 @@ static int _pmbus_is_enabled(struct device *dev, u8 page) return !!(ret & PB_OPERATION_CONTROL_ON); } -static int __maybe_unused pmbus_is_enabled(struct device *dev, u8 page) +static int __maybe_unused pmbus_is_enabled(struct i2c_client *client, u8 page) { - struct i2c_client *client = to_i2c_client(dev->parent); struct pmbus_data *data = i2c_get_clientdata(client); int ret; mutex_lock(&data->update_lock); - ret = _pmbus_is_enabled(dev, page); + ret = _pmbus_is_enabled(client, page); mutex_unlock(&data->update_lock); - return !!(ret & PB_OPERATION_CONTROL_ON); + return ret; } #define to_dev_attr(_dev_attr) \ @@ -2844,7 +2842,7 @@ static int _pmbus_get_flags(struct pmbus_data *data, u8 page, unsigned int *flag if (status < 0) return status; - if (_pmbus_is_enabled(dev, page)) { + if (_pmbus_is_enabled(client, page)) { if (status & PB_STATUS_OFF) { *flags |= REGULATOR_ERROR_FAIL; *event |= REGULATOR_EVENT_FAIL; @@ -2898,7 +2896,10 @@ static int __maybe_unused pmbus_get_flags(struct pmbus_data *data, u8 page, unsi #if IS_ENABLED(CONFIG_REGULATOR) static int pmbus_regulator_is_enabled(struct regulator_dev *rdev) { - return pmbus_is_enabled(rdev_get_dev(rdev), rdev_get_id(rdev)); + struct device *dev = rdev_get_dev(rdev); + struct i2c_client *client = to_i2c_client(dev->parent); + + return pmbus_is_enabled(client, rdev_get_id(rdev)); } static int _pmbus_regulator_on_off(struct regulator_dev *rdev, bool enable) @@ -2945,6 +2946,7 @@ static int pmbus_regulator_get_status(struct regulator_dev *rdev) struct pmbus_data *data = i2c_get_clientdata(client); u8 page = rdev_get_id(rdev); int status, ret; + int event; mutex_lock(&data->update_lock); status = pmbus_get_status(client, page, PMBUS_STATUS_WORD); @@ -2964,7 +2966,7 @@ static int pmbus_regulator_get_status(struct regulator_dev *rdev) goto unlock; } - ret = pmbus_regulator_get_error_flags(rdev, &status); + ret = _pmbus_get_flags(data, rdev_get_id(rdev), &status, &event, false); if (ret) goto unlock; diff --git a/drivers/i2c/busses/i2c-ibm_iic.c b/drivers/i2c/busses/i2c-ibm_iic.c index 
eeb80e34f9ad7..de3b609515e08 100644 --- a/drivers/i2c/busses/i2c-ibm_iic.c +++ b/drivers/i2c/busses/i2c-ibm_iic.c @@ -694,10 +694,8 @@ static int iic_probe(struct platform_device *ofdev) int ret; dev = kzalloc(sizeof(*dev), GFP_KERNEL); - if (!dev) { - dev_err(&ofdev->dev, "failed to allocate device data\n"); + if (!dev) return -ENOMEM; - } platform_set_drvdata(ofdev, dev); diff --git a/drivers/i2c/busses/i2c-nomadik.c b/drivers/i2c/busses/i2c-nomadik.c index a2d12a5b1c34c..9c5d66bd6dc1c 100644 --- a/drivers/i2c/busses/i2c-nomadik.c +++ b/drivers/i2c/busses/i2c-nomadik.c @@ -970,12 +970,10 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id) struct i2c_vendor_data *vendor = id->data; u32 max_fifo_threshold = (vendor->fifodepth / 2) - 1; - dev = devm_kzalloc(&adev->dev, sizeof(struct nmk_i2c_dev), GFP_KERNEL); - if (!dev) { - dev_err(&adev->dev, "cannot allocate memory\n"); - ret = -ENOMEM; - goto err_no_mem; - } + dev = devm_kzalloc(&adev->dev, sizeof(*dev), GFP_KERNEL); + if (!dev) + return -ENOMEM; + dev->vendor = vendor; dev->adev = adev; nmk_i2c_of_probe(np, dev); @@ -996,30 +994,21 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id) dev->virtbase = devm_ioremap(&adev->dev, adev->res.start, resource_size(&adev->res)); - if (!dev->virtbase) { - ret = -ENOMEM; - goto err_no_mem; - } + if (!dev->virtbase) + return -ENOMEM; dev->irq = adev->irq[0]; ret = devm_request_irq(&adev->dev, dev->irq, i2c_irq_handler, 0, DRIVER_NAME, dev); if (ret) { dev_err(&adev->dev, "cannot claim the irq %d\n", dev->irq); - goto err_no_mem; + return ret; } - dev->clk = devm_clk_get(&adev->dev, NULL); + dev->clk = devm_clk_get_enabled(&adev->dev, NULL); if (IS_ERR(dev->clk)) { - dev_err(&adev->dev, "could not get i2c clock\n"); - ret = PTR_ERR(dev->clk); - goto err_no_mem; - } - - ret = clk_prepare_enable(dev->clk); - if (ret) { - dev_err(&adev->dev, "can't prepare_enable clock\n"); - goto err_no_mem; + dev_err(&adev->dev, "could enable i2c clock\n"); + return PTR_ERR(dev->clk); } init_hw(dev); @@ -1042,22 +1031,15 @@ static int nmk_i2c_probe(struct amba_device *adev, const struct amba_id *id) ret = i2c_add_adapter(adap); if (ret) - goto err_no_adap; + return ret; pm_runtime_put(&adev->dev); return 0; - - err_no_adap: - clk_disable_unprepare(dev->clk); - err_no_mem: - - return ret; } static void nmk_i2c_remove(struct amba_device *adev) { - struct resource *res = &adev->res; struct nmk_i2c_dev *dev = amba_get_drvdata(adev); i2c_del_adapter(&dev->adap); @@ -1066,8 +1048,6 @@ static void nmk_i2c_remove(struct amba_device *adev) clear_all_interrupts(dev); /* disable the controller */ i2c_clr_bit(dev->virtbase + I2C_CR, I2C_CR_PE); - clk_disable_unprepare(dev->clk); - release_mem_region(res->start, resource_size(res)); } static struct i2c_vendor_data vendor_stn8815 = { diff --git a/drivers/i2c/busses/i2c-sh7760.c b/drivers/i2c/busses/i2c-sh7760.c index 319d1fa617c88..051b904cb35f6 100644 --- a/drivers/i2c/busses/i2c-sh7760.c +++ b/drivers/i2c/busses/i2c-sh7760.c @@ -443,9 +443,8 @@ static int sh7760_i2c_probe(struct platform_device *pdev) goto out0; } - id = kzalloc(sizeof(struct cami2c), GFP_KERNEL); + id = kzalloc(sizeof(*id), GFP_KERNEL); if (!id) { - dev_err(&pdev->dev, "no mem for private data\n"); ret = -ENOMEM; goto out0; } diff --git a/drivers/i2c/busses/i2c-tiny-usb.c b/drivers/i2c/busses/i2c-tiny-usb.c index 7279ca0eaa2d0..d1fa9ff5aeab4 100644 --- a/drivers/i2c/busses/i2c-tiny-usb.c +++ b/drivers/i2c/busses/i2c-tiny-usb.c @@ -226,10 +226,8 @@ static int 
diff --git a/drivers/i2c/busses/i2c-tiny-usb.c b/drivers/i2c/busses/i2c-tiny-usb.c index 7279ca0eaa2d0..d1fa9ff5aeab4 100644 --- a/drivers/i2c/busses/i2c-tiny-usb.c +++ b/drivers/i2c/busses/i2c-tiny-usb.c @@ -226,10 +226,8 @@ static int i2c_tiny_usb_probe(struct usb_interface *interface, /* allocate memory for our device state and initialize it */ dev = kzalloc(sizeof(*dev), GFP_KERNEL); - if (dev == NULL) { - dev_err(&interface->dev, "Out of memory\n"); + if (!dev) goto error; - } dev->usb_dev = usb_get_dev(interface_to_usbdev(interface)); dev->interface = interface; diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c index 6b3f4384e46ac..a60e587aea817 100644 --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -4062,6 +4062,8 @@ static int resolve_prepare_src(struct rdma_id_private *id_priv, RDMA_CM_ADDR_QUERY))) return -EINVAL; + } else { + memcpy(cma_dst_addr(id_priv), dst_addr, rdma_addr_size(dst_addr)); } if (cma_family(id_priv) != dst_addr->sa_family) { diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 952811c40c54b..ebe6852c40e8c 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -797,7 +797,10 @@ fail: int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata) { struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp); + struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp; struct bnxt_re_dev *rdev = qp->rdev; + struct bnxt_qplib_nq *scq_nq = NULL; + struct bnxt_qplib_nq *rcq_nq = NULL; unsigned int flags; int rc; @@ -831,6 +834,15 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata) ib_umem_release(qp->rumem); ib_umem_release(qp->sumem); + /* Flush all the entries of the notification queues associated with + * the given qp. + */ + scq_nq = qplib_qp->scq->nq; + rcq_nq = qplib_qp->rcq->nq; + bnxt_re_synchronize_nq(scq_nq); + if (scq_nq != rcq_nq) + bnxt_re_synchronize_nq(rcq_nq); + return 0; } diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c index 55f092c2c8a88..b34cc500f51f3 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c @@ -381,6 +381,24 @@ static void bnxt_qplib_service_nq(struct tasklet_struct *t) spin_unlock_bh(&hwq->lock); } +/* bnxt_re_synchronize_nq - self poll the notification queue. * @nq - notification queue pointer + * + * This function polls a given notification queue until all pending + * entries are consumed. + * It is useful for synchronizing notification entries while resources + * are going away. + */ + +void bnxt_re_synchronize_nq(struct bnxt_qplib_nq *nq) +{ + int budget = nq->budget; + + nq->budget = nq->hwq.max_elements; + bnxt_qplib_service_nq(&nq->nq_tasklet); + nq->budget = budget; +} +
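bnxt_re_synchronize_nq() drains the notification queue by temporarily raising the per-pass budget to the queue capacity and invoking the service routine directly, then restoring the normal budget. A stand-alone model of that budget-bump drain (nq_model, poll_queue and the numbers are hypothetical, not driver code):

#include <stdio.h>

struct nq_model {
	int pending;   /* entries still in the queue */
	int budget;    /* max entries consumed per pass */
	int max_elems; /* queue capacity */
};

/* Consumes at most *budget* entries per pass, like bnxt_qplib_service_nq(). */
static void poll_queue(struct nq_model *nq)
{
	int n = nq->pending < nq->budget ? nq->pending : nq->budget;

	nq->pending -= n;
	printf("consumed %d, %d left\n", n, nq->pending);
}

/* Mirror of the synchronize step: lift the budget to the queue capacity so
 * one pass is guaranteed to drain everything, then restore the normal budget
 * used by interrupt-driven polling. */
static void synchronize_nq(struct nq_model *nq)
{
	int budget = nq->budget;

	nq->budget = nq->max_elems;
	poll_queue(nq);
	nq->budget = budget;
}

int main(void)
{
	struct nq_model nq = { .pending = 300, .budget = 64, .max_elems = 1024 };

	synchronize_nq(&nq); /* drains all 300 in one pass */
	return 0;
}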
static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance) { struct bnxt_qplib_nq *nq = dev_instance; @@ -402,19 +420,19 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill) if (!nq->requested) return; - tasklet_disable(&nq->nq_tasklet); + nq->requested = false; /* Mask h/w interrupt */ bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, false); /* Sync with last running IRQ handler */ synchronize_irq(nq->msix_vec); - if (kill) - tasklet_kill(&nq->nq_tasklet); - irq_set_affinity_hint(nq->msix_vec, NULL); free_irq(nq->msix_vec, nq); kfree(nq->name); nq->name = NULL; - nq->requested = false; + + if (kill) + tasklet_kill(&nq->nq_tasklet); + tasklet_disable(&nq->nq_tasklet); } void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq) diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h index a42820821c473..404b851091ca2 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h @@ -553,6 +553,7 @@ int bnxt_qplib_process_flush_list(struct bnxt_qplib_cq *cq, struct bnxt_qplib_cqe *cqe, int num_cqes); void bnxt_qplib_flush_cqn_wq(struct bnxt_qplib_qp *qp); +void bnxt_re_synchronize_nq(struct bnxt_qplib_nq *nq); static inline void *bnxt_qplib_get_swqe(struct bnxt_qplib_q *que, u32 *swq_idx) { diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c index c11b8e708844c..05683ce64887f 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c @@ -53,70 +53,139 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t); -/* Hardware communication channel */ -static int __wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie) +/** + * bnxt_qplib_map_rc - map return type based on opcode + * @opcode - roce slow path opcode + * + * In some cases, such as when a firmware halt is detected, the driver must + * remap the error code of the timed-out command. + * + * It is not safe to assume the hardware is really inactive, so opcodes such + * as destroy qp are not normally safe to report as successful; however, this + * function is only called once FW has already reported a timeout, which is + * possible only when FW crashes and resets, clearing all HW resources. + * + * Returns: + * 0 to communicate success to caller. + * Non zero error code to communicate failure to caller. + */ +static int bnxt_qplib_map_rc(u8 opcode) +{ + switch (opcode) { + case CMDQ_BASE_OPCODE_DESTROY_QP: + case CMDQ_BASE_OPCODE_DESTROY_SRQ: + case CMDQ_BASE_OPCODE_DESTROY_CQ: + case CMDQ_BASE_OPCODE_DEALLOCATE_KEY: + case CMDQ_BASE_OPCODE_DEREGISTER_MR: + case CMDQ_BASE_OPCODE_DELETE_GID: + case CMDQ_BASE_OPCODE_DESTROY_QP1: + case CMDQ_BASE_OPCODE_DESTROY_AH: + case CMDQ_BASE_OPCODE_DEINITIALIZE_FW: + case CMDQ_BASE_OPCODE_MODIFY_ROCE_CC: + case CMDQ_BASE_OPCODE_SET_LINK_AGGR_MODE: + return 0; + default: + return -ETIMEDOUT; + } +} + +/** + * __wait_for_resp - Don't hold the cpu context and wait for response + * @rcfw - rcfw channel instance of rdev + * @cookie - cookie to track the command + * @opcode - opcode of the submitted rcfw command + * + * Wait for command completion in sleepable context. + * + * Returns: + * 0 if command is completed by firmware. + * Non zero error code otherwise. + */ +static int __wait_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie, u8 opcode) { struct bnxt_qplib_cmdq_ctx *cmdq; u16 cbit; - int rc; + int ret; cmdq = &rcfw->cmdq; cbit = cookie % rcfw->cmdq_depth; - rc = wait_event_timeout(cmdq->waitq, - !test_bit(cbit, cmdq->cmdq_bitmap), - msecs_to_jiffies(RCFW_CMD_WAIT_TIME_MS)); - return rc ? 0 : -ETIMEDOUT; + + do { + if (test_bit(ERR_DEVICE_DETACHED, &cmdq->flags)) + return bnxt_qplib_map_rc(opcode); + + /* Non zero means command completed */ + ret = wait_event_timeout(cmdq->waitq, + !test_bit(cbit, cmdq->cmdq_bitmap), + msecs_to_jiffies(10000)); + + if (!test_bit(cbit, cmdq->cmdq_bitmap)) + return 0; + + bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet); + + if (!test_bit(cbit, cmdq->cmdq_bitmap)) + return 0; + + } while (true); };
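__wait_for_resp() bails out through bnxt_qplib_map_rc() once ERR_DEVICE_DETACHED is set: teardown opcodes report success so software state can still be freed, while everything else keeps failing with a timeout. A compilable model of that mapping, using made-up opcode values (the real ones come from the bnxt_re firmware interface):

#include <errno.h>
#include <stdio.h>

/* Hypothetical opcodes standing in for CMDQ_BASE_OPCODE_*. */
enum { OP_CREATE_QP, OP_DESTROY_QP, OP_QUERY_FUNC, OP_DESTROY_CQ };

static int map_rc_when_fw_dead(int opcode)
{
	switch (opcode) {
	case OP_DESTROY_QP:
	case OP_DESTROY_CQ:
		return 0;          /* resource is gone either way */
	default:
		return -ETIMEDOUT; /* nothing useful can complete now */
	}
}

int main(void)
{
	printf("destroy qp -> %d\n", map_rc_when_fw_dead(OP_DESTROY_QP));
	printf("create qp  -> %d\n", map_rc_when_fw_dead(OP_CREATE_QP));
	return 0;
}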
-static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie) +/** + * __block_for_resp - hold the cpu context and wait for response + * @rcfw - rcfw channel instance of rdev + * @cookie - cookie to track the command + * @opcode - opcode of the submitted rcfw command + * + * This function will hold the cpu (non-sleepable context) and + * wait for command completion. Maximum holding interval is 8 seconds. + * + * Returns: + * -ETIMEDOUT if the command is not completed within the time interval. + * 0 if the command is completed by firmware. + */ +static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie, u8 opcode) { - u32 count = RCFW_BLOCKED_CMD_WAIT_COUNT; - struct bnxt_qplib_cmdq_ctx *cmdq; + struct bnxt_qplib_cmdq_ctx *cmdq = &rcfw->cmdq; + unsigned long issue_time = 0; u16 cbit; - cmdq = &rcfw->cmdq; cbit = cookie % rcfw->cmdq_depth; - if (!test_bit(cbit, cmdq->cmdq_bitmap)) - goto done; + issue_time = jiffies; + do { + if (test_bit(ERR_DEVICE_DETACHED, &cmdq->flags)) + return bnxt_qplib_map_rc(opcode); + udelay(1); + bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet); - } while (test_bit(cbit, cmdq->cmdq_bitmap) && --count); -done: - return count ? 0 : -ETIMEDOUT; + if (!test_bit(cbit, cmdq->cmdq_bitmap)) + return 0; + + } while (time_before(jiffies, issue_time + (8 * HZ))); + + return -ETIMEDOUT; };
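__block_for_resp() replaces the fixed retry counter with a wall-clock bound: spin in 1 µs steps, service the completion queue, and give up after 8 seconds. A user-space model of the same shape (completed() stands in for the cmdq bitmap test; the kernel uses udelay(), jiffies and time_before()):

#include <errno.h>
#include <stdbool.h>
#include <time.h>

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Spin in small steps, re-check completion, stop after limit_sec. */
static int block_for_resp(bool (*completed)(void), double limit_sec)
{
	double start = now_sec();

	do {
		struct timespec step = { 0, 1000 }; /* ~udelay(1) */

		nanosleep(&step, NULL);
		if (completed())
			return 0;
	} while (now_sec() - start < limit_sec);
	return -ETIMEDOUT;
}

static bool done(void) { return true; }

int main(void)
{
	return block_for_resp(done, 8.0); /* completes on the first check */
}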
static int __send_message(struct bnxt_qplib_rcfw *rcfw, struct bnxt_qplib_cmdqmsg *msg) { - struct bnxt_qplib_cmdq_ctx *cmdq = &rcfw->cmdq; - struct bnxt_qplib_hwq *hwq = &cmdq->hwq; + u32 bsize, opcode, free_slots, required_slots; + struct bnxt_qplib_cmdq_ctx *cmdq; struct bnxt_qplib_crsqe *crsqe; struct bnxt_qplib_cmdqe *cmdqe; + struct bnxt_qplib_hwq *hwq; u32 sw_prod, cmdq_prod; struct pci_dev *pdev; unsigned long flags; - u32 bsize, opcode; u16 cookie, cbit; u8 *preq; + cmdq = &rcfw->cmdq; + hwq = &cmdq->hwq; pdev = rcfw->pdev; opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz); - if (!test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) && - (opcode != CMDQ_BASE_OPCODE_QUERY_FUNC && - opcode != CMDQ_BASE_OPCODE_INITIALIZE_FW && - opcode != CMDQ_BASE_OPCODE_QUERY_VERSION)) { - dev_err(&pdev->dev, - "RCFW not initialized, reject opcode 0x%x\n", opcode); - return -EINVAL; - } - - if (test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) && - opcode == CMDQ_BASE_OPCODE_INITIALIZE_FW) { - dev_err(&pdev->dev, "RCFW already initialized!\n"); - return -EINVAL; - } if (test_bit(FIRMWARE_TIMED_OUT, &cmdq->flags)) return -ETIMEDOUT; @@ -125,40 +194,37 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, * cmdqe */ spin_lock_irqsave(&hwq->lock, flags); - if (msg->req->cmd_size >= HWQ_FREE_SLOTS(hwq)) { - dev_err(&pdev->dev, "RCFW: CMDQ is full!\n"); + required_slots = bnxt_qplib_get_cmd_slots(msg->req); + free_slots = HWQ_FREE_SLOTS(hwq); + cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE; + cbit = cookie % rcfw->cmdq_depth; + + if (required_slots >= free_slots || + test_bit(cbit, cmdq->cmdq_bitmap)) { + dev_info_ratelimited(&pdev->dev, + "CMDQ is full req/free %d/%d!\n", + required_slots, free_slots); spin_unlock_irqrestore(&hwq->lock, flags); return -EAGAIN; } - - - cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE; - cbit = cookie % rcfw->cmdq_depth; if (msg->block) cookie |= RCFW_CMD_IS_BLOCKING; - set_bit(cbit, cmdq->cmdq_bitmap); __set_cmdq_base_cookie(msg->req, msg->req_sz, cpu_to_le16(cookie)); crsqe = &rcfw->crsqe_tbl[cbit]; - if (crsqe->resp) { - spin_unlock_irqrestore(&hwq->lock, flags); - return -EBUSY; - } - - /* change the cmd_size to the number of 16byte cmdq unit. - * req->cmd_size is modified here - */ bsize = bnxt_qplib_set_cmd_slots(msg->req); - - memset(msg->resp, 0, sizeof(*msg->resp)); + crsqe->free_slots = free_slots; crsqe->resp = (struct creq_qp_event *)msg->resp; crsqe->resp->cookie = cpu_to_le16(cookie); crsqe->req_size = __get_cmdq_base_cmd_size(msg->req, msg->req_sz); if (__get_cmdq_base_resp_size(msg->req, msg->req_sz) && msg->sb) { struct bnxt_qplib_rcfw_sbuf *sbuf = msg->sb; - __set_cmdq_base_resp_addr(msg->req, msg->req_sz, cpu_to_le64(sbuf->dma_addr)); + + __set_cmdq_base_resp_addr(msg->req, msg->req_sz, + cpu_to_le64(sbuf->dma_addr)); __set_cmdq_base_resp_size(msg->req, msg->req_sz, - ALIGN(sbuf->size, BNXT_QPLIB_CMDQE_UNITS)); + ALIGN(sbuf->size, + BNXT_QPLIB_CMDQE_UNITS)); } preq = (u8 *)msg->req; @@ -166,11 +232,6 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, /* Locate the next cmdq slot */ sw_prod = HWQ_CMP(hwq->prod, hwq); cmdqe = bnxt_qplib_get_qe(hwq, sw_prod, NULL); - if (!cmdqe) { - dev_err(&pdev->dev, - "RCFW request failed with no cmdqe!\n"); - goto done; - } /* Copy a segment of the req cmd to the cmdq */ memset(cmdqe, 0, sizeof(*cmdqe)); memcpy(cmdqe, preq, min_t(u32, bsize, sizeof(*cmdqe))); @@ -194,45 +255,121 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw, wmb(); writel(cmdq_prod, cmdq->cmdq_mbox.prod); writel(RCFW_CMDQ_TRIG_VAL, cmdq->cmdq_mbox.db); -done: spin_unlock_irqrestore(&hwq->lock, flags); /* Return the CREQ response pointer */ return 0; }
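__send_message() now admits a command only if its slot count fits into the free ring space and the cookie slot is not still owned by an outstanding command, returning -EAGAIN otherwise. A small model of that admission check (depth and sizes are made up; the driver derives required_slots from the request or its TLV header via bnxt_qplib_get_cmd_slots()):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define CMDQ_DEPTH 16

struct cmdq_model {
	bool cookie_busy[CMDQ_DEPTH]; /* like the cmdq_bitmap */
	unsigned int free_slots;      /* like HWQ_FREE_SLOTS() */
	unsigned int seq_num;
};

static int try_post(struct cmdq_model *q, unsigned int required_slots)
{
	unsigned int cookie = q->seq_num % CMDQ_DEPTH;

	/* Reject if the ring lacks space or the cookie slot is in use. */
	if (required_slots >= q->free_slots || q->cookie_busy[cookie])
		return -EAGAIN; /* the caller may retry later */

	q->cookie_busy[cookie] = true;
	q->free_slots -= required_slots;
	q->seq_num++;
	return 0;
}

int main(void)
{
	struct cmdq_model q = { .free_slots = 4 };

	printf("small cmd: %d\n", try_post(&q, 2)); /* 0 */
	printf("big cmd:   %d\n", try_post(&q, 8)); /* -EAGAIN */
	return 0;
}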
-int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw, - struct bnxt_qplib_cmdqmsg *msg) +/** + * __poll_for_resp - self poll completion for rcfw command + * @rcfw - rcfw channel instance of rdev + * @cookie - cookie to track the command + * @opcode - opcode of the submitted rcfw command + * + * It works the same as __wait_for_resp except that this function polls + * at short intervals since interrupts are disabled. + * It must be called from a sleepable context. + * + * Returns: + * -ETIMEDOUT if the command is not completed within the time interval. + * 0 if the command is completed by firmware. + */ +static int __poll_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie, + u8 opcode) +{ + struct bnxt_qplib_cmdq_ctx *cmdq = &rcfw->cmdq; + unsigned long issue_time; + u16 cbit; + + cbit = cookie % rcfw->cmdq_depth; + issue_time = jiffies; + + do { + if (test_bit(ERR_DEVICE_DETACHED, &cmdq->flags)) + return bnxt_qplib_map_rc(opcode); + + usleep_range(1000, 1001); + + bnxt_qplib_service_creq(&rcfw->creq.creq_tasklet); + if (!test_bit(cbit, cmdq->cmdq_bitmap)) + return 0; + if (jiffies_to_msecs(jiffies - issue_time) > 10000) + return -ETIMEDOUT; + } while (true); +};
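__poll_for_resp() is the no-interrupt variant: it sleeps about a millisecond between checks, services the CREQ itself, and gives up after ten seconds. A user-space model with callback stand-ins for the tasklet kick and the completion-bitmap test (illustrative names only):

#include <errno.h>
#include <stdbool.h>
#include <time.h>

static int poll_for_resp(void (*kick)(void), bool (*completed)(void))
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		struct timespec ms = { 0, 1000 * 1000 };

		nanosleep(&ms, NULL); /* sleepable context required */
		kick();               /* service the completion queue */
		if (completed())
			return 0;
		clock_gettime(CLOCK_MONOTONIC, &now);
		if (now.tv_sec - start.tv_sec > 10)
			return -ETIMEDOUT;
	}
}

static void noop(void) { }
static bool done(void) { return true; }

int main(void)
{
	return poll_for_resp(noop, done);
}

The design difference from the blocking path is only the sleep primitive: usleep_range() yields the cpu, so this variant is usable where the caller may sleep but the completion interrupt is masked.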
+ +static int __send_message_basic_sanity(struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_cmdqmsg *msg) +{ + struct bnxt_qplib_cmdq_ctx *cmdq; + u32 opcode; + + cmdq = &rcfw->cmdq; + opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz); + + /* Prevent posting if f/w is not in a state to process */ + if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags)) + return -ENXIO; + + if (test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) && + opcode == CMDQ_BASE_OPCODE_INITIALIZE_FW) { + dev_err(&rcfw->pdev->dev, "QPLIB: RCFW already initialized!"); + return -EINVAL; + } + + if (!test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) && + (opcode != CMDQ_BASE_OPCODE_QUERY_FUNC && + opcode != CMDQ_BASE_OPCODE_INITIALIZE_FW && + opcode != CMDQ_BASE_OPCODE_QUERY_VERSION)) { + dev_err(&rcfw->pdev->dev, + "QPLIB: RCFW not initialized, reject opcode 0x%x", + opcode); + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * __bnxt_qplib_rcfw_send_message - qplib interface to send + * and complete rcfw command. + * @rcfw - rcfw channel instance of rdev + * @msg - qplib message internal + * + * This function does not account for the shadow queue depth. It sends + * every command unconditionally as long as the send queue is not full. + * + * Returns: + * 0 if command completed by firmware. + * Non zero if the command is not completed by firmware. + */ +static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_cmdqmsg *msg) { struct creq_qp_event *evnt = (struct creq_qp_event *)msg->resp; u16 cookie; - u8 opcode, retry_cnt = 0xFF; int rc = 0; + u8 opcode; - /* Prevent posting if f/w is not in a state to process */ - if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags)) - return 0; + opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz); - do { - opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz); - rc = __send_message(rcfw, msg); - cookie = le16_to_cpu(__get_cmdq_base_cookie(msg->req, msg->req_sz)) & - RCFW_MAX_COOKIE_VALUE; - if (!rc) - break; - if (!retry_cnt || (rc != -EAGAIN && rc != -EBUSY)) { - /* send failed */ - dev_err(&rcfw->pdev->dev, "cmdq[%#x]=%#x send failed\n", - cookie, opcode); - return rc; - } - msg->block ? mdelay(1) : usleep_range(500, 1000); + rc = __send_message_basic_sanity(rcfw, msg); + if (rc) + return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc; - } while (retry_cnt--); + rc = __send_message(rcfw, msg); + if (rc) + return rc; + + cookie = le16_to_cpu(__get_cmdq_base_cookie(msg->req, msg->req_sz)) + & RCFW_MAX_COOKIE_VALUE; if (msg->block) - rc = __block_for_resp(rcfw, cookie); + rc = __block_for_resp(rcfw, cookie, opcode); + else if (atomic_read(&rcfw->rcfw_intr_enabled)) + rc = __wait_for_resp(rcfw, cookie, opcode); else - rc = __wait_for_resp(rcfw, cookie); + rc = __poll_for_resp(rcfw, cookie, opcode); if (rc) { /* timed out */ dev_err(&rcfw->pdev->dev, "cmdq[%#x]=%#x timedout (%d)msec\n", @@ -250,6 +387,48 @@ int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw, struct bnxt_qplib_cmdqmsg *msg) return rc; }
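__bnxt_qplib_rcfw_send_message() ends by picking one of three completion strategies: blocking callers spin in __block_for_resp(), and non-blocking callers either sleep on the waitqueue when rcfw_intr_enabled is set or fall back to self-polling. A compact model of that dispatch (names are illustrative, not the driver's):

#include <stdbool.h>

enum wait_mode { WAIT_BLOCK, WAIT_IRQ, WAIT_POLL };

static enum wait_mode pick_wait_mode(bool block, int intr_enabled)
{
	if (block)
		return WAIT_BLOCK; /* __block_for_resp(): udelay spin */
	if (intr_enabled)
		return WAIT_IRQ;   /* __wait_for_resp(): waitqueue sleep */
	return WAIT_POLL;          /* __poll_for_resp(): usleep self-poll */
}

int main(void)
{
	/* Non-blocking send while interrupts are up -> waitqueue path. */
	return pick_wait_mode(false, 1) == WAIT_IRQ ? 0 : 1;
}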
+ +/** + * bnxt_qplib_rcfw_send_message - qplib interface to send + * and complete rcfw command. + * @rcfw - rcfw channel instance of rdev + * @msg - qplib message internal + * + * The driver interacts with firmware through the rcfw channel/slow path + * in two ways. + * a. Blocking rcfw command send. In this path, the driver cannot hold + * the context for a long period since it holds the cpu until the + * command completes. + * b. Non-blocking rcfw command send. In this path, the driver can hold + * the context for a longer period. There may be many pending commands + * waiting for completion because of the non-blocking nature. + * + * The driver uses a shadow queue depth. The current queue depth of 8K + * (given the size of an rcfw message, only ~4K rcfw commands can actually + * be outstanding) is not optimal for rcfw command processing in firmware. + * + * Restrict non-blocking rcfw commands to at most + * #RCFW_CMD_NON_BLOCKING_SHADOW_QD. + * Allow all blocking commands as long as the queue is not full. + * + * Returns: + * 0 if command completed by firmware. + * Non zero if the command is not completed by firmware. + */ +int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_cmdqmsg *msg) +{ + int ret; + + if (!msg->block) { + down(&rcfw->rcfw_inflight); + ret = __bnxt_qplib_rcfw_send_message(rcfw, msg); + up(&rcfw->rcfw_inflight); + } else { + ret = __bnxt_qplib_rcfw_send_message(rcfw, msg); + } + + return ret; +} + /* Completions */ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw, struct creq_func_event *func_event) @@ -647,18 +826,18 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill) if (!creq->requested) return; - tasklet_disable(&creq->creq_tasklet); + creq->requested = false; /* Mask h/w interrupts */ bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, false); /* Sync with last running IRQ-handler */ synchronize_irq(creq->msix_vec); - if (kill) - tasklet_kill(&creq->creq_tasklet); - free_irq(creq->msix_vec, rcfw); kfree(creq->irq_name); creq->irq_name = NULL; - creq->requested = false; + atomic_set(&rcfw->rcfw_intr_enabled, 0); + if (kill) + tasklet_kill(&creq->creq_tasklet); + tasklet_disable(&creq->creq_tasklet); } void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw) @@ -720,6 +899,7 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector, creq->requested = true; bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, res->cctx, true); + atomic_inc(&rcfw->rcfw_intr_enabled); return 0; } @@ -856,6 +1036,7 @@ int bnxt_qplib_enable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw, return rc; } + sema_init(&rcfw->rcfw_inflight, RCFW_CMD_NON_BLOCKING_SHADOW_QD); bnxt_qplib_start_rcfw(rcfw); return 0;
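bnxt_qplib_rcfw_send_message() throttles only the non-blocking path: a counting semaphore initialized to RCFW_CMD_NON_BLOCKING_SHADOW_QD caps how many such commands can be in flight, while blocking commands bypass the throttle entirely. A user-space model using POSIX semaphores in place of the kernel's struct semaphore:

#include <semaphore.h>
#include <stdbool.h>

#define SHADOW_QD 64 /* mirrors RCFW_CMD_NON_BLOCKING_SHADOW_QD */

static sem_t inflight;

static int send_message(bool block, int (*send_and_wait)(void))
{
	int ret;

	if (!block) {
		sem_wait(&inflight); /* like down(&rcfw->rcfw_inflight) */
		ret = send_and_wait();
		sem_post(&inflight); /* like up(&rcfw->rcfw_inflight) */
	} else {
		ret = send_and_wait();
	}
	return ret;
}

static int ok(void) { return 0; }

int main(void)
{
	sem_init(&inflight, 0, SHADOW_QD);
	return send_message(false, ok);
}

The design keeps blocking callers latency-sensitive (they never sleep on the semaphore) while bounding the backlog that non-blocking callers can build up against firmware.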
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h index 92f7a25533d3b..4608c0ef07a87 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h @@ -67,6 +67,8 @@ static inline void bnxt_qplib_rcfw_cmd_prep(struct cmdq_base *req, req->cmd_size = cmd_size; } +/* Shadow queue depth for non-blocking commands */ +#define RCFW_CMD_NON_BLOCKING_SHADOW_QD 64 #define RCFW_CMD_WAIT_TIME_MS 20000 /* 20 Seconds timeout */ /* CMDQ elements */ @@ -89,6 +91,26 @@ static inline u32 bnxt_qplib_cmdqe_page_size(u32 depth) return (bnxt_qplib_cmdqe_npages(depth) * PAGE_SIZE); } +/* Get the number of command units required for the req. The + * function returns the correct value only if called before + * bnxt_qplib_set_cmd_slots modifies the req. + */ +static inline u32 bnxt_qplib_get_cmd_slots(struct cmdq_base *req) +{ + u32 cmd_units = 0; + + if (HAS_TLV_HEADER(req)) { + struct roce_tlv *tlv_req = (struct roce_tlv *)req; + + cmd_units = tlv_req->total_size; + } else { + cmd_units = (req->cmd_size + BNXT_QPLIB_CMDQE_UNITS - 1) / + BNXT_QPLIB_CMDQE_UNITS; + } + + return cmd_units; +} + static inline u32 bnxt_qplib_set_cmd_slots(struct cmdq_base *req) { u32 cmd_byte = 0; @@ -132,6 +154,8 @@ typedef int (*aeq_handler_t)(struct bnxt_qplib_rcfw *, void *, void *); struct bnxt_qplib_crsqe { struct creq_qp_event *resp; u32 req_size; + /* Free slots at the time of submission */ + u32 free_slots; }; struct bnxt_qplib_rcfw_sbuf { @@ -201,6 +225,8 @@ struct bnxt_qplib_rcfw { u64 oos_prev; u32 init_oos_stats; u32 cmdq_depth; + atomic_t rcfw_intr_enabled; + struct semaphore rcfw_inflight; }; struct bnxt_qplib_cmdqmsg { diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index d88c9184007ea..45e3344daa048 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -2712,13 +2712,13 @@ static int irdma_sc_cq_modify(struct irdma_sc_cq *cq, */ void irdma_check_cqp_progress(struct irdma_cqp_timeout *timeout, struct irdma_sc_dev *dev) { - if (timeout->compl_cqp_cmds != dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]) { - timeout->compl_cqp_cmds = dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]; + u64 completed_ops = atomic64_read(&dev->cqp->completed_ops); + + if (timeout->compl_cqp_cmds != completed_ops) { + timeout->compl_cqp_cmds = completed_ops; timeout->count = 0; - } else { - if (dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS] != - timeout->compl_cqp_cmds) - timeout->count++; + } else if (timeout->compl_cqp_cmds != dev->cqp->requested_ops) { + timeout->count++; } } @@ -2761,7 +2761,7 @@ static int irdma_cqp_poll_registers(struct irdma_sc_cqp *cqp, u32 tail, if (newtail != tail) { /* SUCCESS */ IRDMA_RING_MOVE_TAIL(cqp->sq_ring); - cqp->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]++; + atomic64_inc(&cqp->completed_ops); return 0; } udelay(cqp->dev->hw_attrs.max_sleep_count); @@ -3121,8 +3121,8 @@ int irdma_sc_cqp_init(struct irdma_sc_cqp *cqp, info->dev->cqp = cqp; IRDMA_RING_INIT(cqp->sq_ring, cqp->sq_size); - cqp->dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS] = 0; - cqp->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS] = 0; + cqp->requested_ops = 0; + atomic64_set(&cqp->completed_ops, 0); /* for the cqp commands backlog.
*/ INIT_LIST_HEAD(&cqp->dev->cqp_cmd_head); @@ -3274,7 +3274,7 @@ __le64 *irdma_sc_cqp_get_next_send_wqe_idx(struct irdma_sc_cqp *cqp, u64 scratch if (ret_code) return NULL; - cqp->dev->cqp_cmd_stats[IRDMA_OP_REQ_CMDS]++; + cqp->requested_ops++; if (!*wqe_idx) cqp->polarity = !cqp->polarity; wqe = cqp->sq_base[*wqe_idx].elem; @@ -3363,6 +3363,9 @@ int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, if (polarity != ccq->cq_uk.polarity) return -ENOENT; + /* Ensure CEQE contents are read after valid bit is checked */ + dma_rmb(); + get_64bit_val(cqe, 8, &qp_ctx); cqp = (struct irdma_sc_cqp *)(unsigned long)qp_ctx; info->error = (bool)FIELD_GET(IRDMA_CQ_ERROR, temp); @@ -3397,7 +3400,7 @@ int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, dma_wmb(); /* make sure shadow area is updated before moving tail */ IRDMA_RING_MOVE_TAIL(cqp->sq_ring); - ccq->dev->cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]++; + atomic64_inc(&cqp->completed_ops); return ret_code; } @@ -4009,13 +4012,17 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, u8 polarity; aeqe = IRDMA_GET_CURRENT_AEQ_ELEM(aeq); - get_64bit_val(aeqe, 0, &compl_ctx); get_64bit_val(aeqe, 8, &temp); polarity = (u8)FIELD_GET(IRDMA_AEQE_VALID, temp); if (aeq->polarity != polarity) return -ENOENT; + /* Ensure AEQE contents are read after valid bit is checked */ + dma_rmb(); + + get_64bit_val(aeqe, 0, &compl_ctx); + print_hex_dump_debug("WQE: AEQ_ENTRY WQE", DUMP_PREFIX_OFFSET, 16, 8, aeqe, 16, false); diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 6014b9d06a9ba..d06e45d2c23fd 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -191,32 +191,30 @@ enum irdma_cqp_op_type { IRDMA_OP_MANAGE_VF_PBLE_BP = 25, IRDMA_OP_QUERY_FPM_VAL = 26, IRDMA_OP_COMMIT_FPM_VAL = 27, - IRDMA_OP_REQ_CMDS = 28, - IRDMA_OP_CMPL_CMDS = 29, - IRDMA_OP_AH_CREATE = 30, - IRDMA_OP_AH_MODIFY = 31, - IRDMA_OP_AH_DESTROY = 32, - IRDMA_OP_MC_CREATE = 33, - IRDMA_OP_MC_DESTROY = 34, - IRDMA_OP_MC_MODIFY = 35, - IRDMA_OP_STATS_ALLOCATE = 36, - IRDMA_OP_STATS_FREE = 37, - IRDMA_OP_STATS_GATHER = 38, - IRDMA_OP_WS_ADD_NODE = 39, - IRDMA_OP_WS_MODIFY_NODE = 40, - IRDMA_OP_WS_DELETE_NODE = 41, - IRDMA_OP_WS_FAILOVER_START = 42, - IRDMA_OP_WS_FAILOVER_COMPLETE = 43, - IRDMA_OP_SET_UP_MAP = 44, - IRDMA_OP_GEN_AE = 45, - IRDMA_OP_QUERY_RDMA_FEATURES = 46, - IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY = 47, - IRDMA_OP_ADD_LOCAL_MAC_ENTRY = 48, - IRDMA_OP_DELETE_LOCAL_MAC_ENTRY = 49, - IRDMA_OP_CQ_MODIFY = 50, + IRDMA_OP_AH_CREATE = 28, + IRDMA_OP_AH_MODIFY = 29, + IRDMA_OP_AH_DESTROY = 30, + IRDMA_OP_MC_CREATE = 31, + IRDMA_OP_MC_DESTROY = 32, + IRDMA_OP_MC_MODIFY = 33, + IRDMA_OP_STATS_ALLOCATE = 34, + IRDMA_OP_STATS_FREE = 35, + IRDMA_OP_STATS_GATHER = 36, + IRDMA_OP_WS_ADD_NODE = 37, + IRDMA_OP_WS_MODIFY_NODE = 38, + IRDMA_OP_WS_DELETE_NODE = 39, + IRDMA_OP_WS_FAILOVER_START = 40, + IRDMA_OP_WS_FAILOVER_COMPLETE = 41, + IRDMA_OP_SET_UP_MAP = 42, + IRDMA_OP_GEN_AE = 43, + IRDMA_OP_QUERY_RDMA_FEATURES = 44, + IRDMA_OP_ALLOC_LOCAL_MAC_ENTRY = 45, + IRDMA_OP_ADD_LOCAL_MAC_ENTRY = 46, + IRDMA_OP_DELETE_LOCAL_MAC_ENTRY = 47, + IRDMA_OP_CQ_MODIFY = 48, /* Must be last entry*/ - IRDMA_MAX_CQP_OPS = 51, + IRDMA_MAX_CQP_OPS = 49, }; /* CQP SQ WQES */ diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index 795f7fd4f2574..457368e324e10 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -191,6 +191,7 @@ static void irdma_set_flush_fields(struct irdma_sc_qp *qp, case 
IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS: case IRDMA_AE_AMP_MWBIND_BIND_DISABLED: case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS: + case IRDMA_AE_AMP_MWBIND_VALID_STAG: qp->flush_code = FLUSH_MW_BIND_ERR; qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR; break; @@ -2075,7 +2076,7 @@ void irdma_cqp_ce_handler(struct irdma_pci_f *rf, struct irdma_sc_cq *cq) cqp_request->compl_info.error = info.error; if (cqp_request->waiting) { - cqp_request->request_done = true; + WRITE_ONCE(cqp_request->request_done, true); wake_up(&cqp_request->waitq); irdma_put_cqp_request(&rf->cqp, cqp_request); } else { diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index def6dd58dcd48..2323962cdeacb 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -161,8 +161,8 @@ struct irdma_cqp_request { void (*callback_fcn)(struct irdma_cqp_request *cqp_request); void *param; struct irdma_cqp_compl_info compl_info; + bool request_done; /* READ/WRITE_ONCE macros operate on it */ bool waiting:1; - bool request_done:1; bool dynamic:1; }; diff --git a/drivers/infiniband/hw/irdma/puda.c b/drivers/infiniband/hw/irdma/puda.c index 4ec9639f1bdbf..562531712ea44 100644 --- a/drivers/infiniband/hw/irdma/puda.c +++ b/drivers/infiniband/hw/irdma/puda.c @@ -230,6 +230,9 @@ static int irdma_puda_poll_info(struct irdma_sc_cq *cq, if (valid_bit != cq_uk->polarity) return -ENOENT; + /* Ensure CQE contents are read after valid bit is checked */ + dma_rmb(); + if (cq->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) ext_valid = (bool)FIELD_GET(IRDMA_CQ_EXTCQE, qword3); @@ -243,6 +246,9 @@ static int irdma_puda_poll_info(struct irdma_sc_cq *cq, if (polarity != cq_uk->polarity) return -ENOENT; + /* Ensure ext CQE contents are read after ext valid bit is checked */ + dma_rmb(); + IRDMA_RING_MOVE_HEAD_NOCHECK(cq_uk->cq_ring); if (!IRDMA_RING_CURRENT_HEAD(cq_uk->cq_ring)) cq_uk->polarity = !cq_uk->polarity; diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 5ee68604e59fc..a20709577ab0a 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -365,6 +365,8 @@ struct irdma_sc_cqp { struct irdma_dcqcn_cc_params dcqcn_params; __le64 *host_ctx; u64 *scratch_array; + u64 requested_ops; + atomic64_t completed_ops; u32 cqp_id; u32 sq_size; u32 hw_sq_size; diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c index dd428d915c175..280d633d4ec4f 100644 --- a/drivers/infiniband/hw/irdma/uk.c +++ b/drivers/infiniband/hw/irdma/uk.c @@ -1161,7 +1161,7 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, } wqe_idx = (u32)FIELD_GET(IRDMA_CQ_WQEIDX, qword3); info->qp_handle = (irdma_qp_handle)(unsigned long)qp; - info->op_type = (u8)FIELD_GET(IRDMA_CQ_SQ, qword3); + info->op_type = (u8)FIELD_GET(IRDMACQ_OP, qword3); if (info->q_type == IRDMA_CQE_QTYPE_RQ) { u32 array_idx; @@ -1527,6 +1527,9 @@ void irdma_uk_clean_cq(void *q, struct irdma_cq_uk *cq) if (polarity != temp) break; + /* Ensure CQE contents are read after valid bit is checked */ + dma_rmb(); + get_64bit_val(cqe, 8, &comp_ctx); if ((void *)(unsigned long)comp_ctx == q) set_64bit_val(cqe, 8, 0); diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c index 71e1c5d347092..eb083f70b09ff 100644 --- a/drivers/infiniband/hw/irdma/utils.c +++ b/drivers/infiniband/hw/irdma/utils.c @@ -481,7 +481,7 @@ void irdma_free_cqp_request(struct irdma_cqp *cqp, if (cqp_request->dynamic) { kfree(cqp_request); } else { - 
cqp_request->request_done = false; + WRITE_ONCE(cqp_request->request_done, false); cqp_request->callback_fcn = NULL; cqp_request->waiting = false; @@ -515,7 +515,7 @@ irdma_free_pending_cqp_request(struct irdma_cqp *cqp, { if (cqp_request->waiting) { cqp_request->compl_info.error = true; - cqp_request->request_done = true; + WRITE_ONCE(cqp_request->request_done, true); wake_up(&cqp_request->waitq); } wait_event_timeout(cqp->remove_wq, @@ -567,11 +567,11 @@ static int irdma_wait_event(struct irdma_pci_f *rf, bool cqp_error = false; int err_code = 0; - cqp_timeout.compl_cqp_cmds = rf->sc_dev.cqp_cmd_stats[IRDMA_OP_CMPL_CMDS]; + cqp_timeout.compl_cqp_cmds = atomic64_read(&rf->sc_dev.cqp->completed_ops); do { irdma_cqp_ce_handler(rf, &rf->ccq.sc_cq); if (wait_event_timeout(cqp_request->waitq, - cqp_request->request_done, + READ_ONCE(cqp_request->request_done), msecs_to_jiffies(CQP_COMPL_WAIT_TIME_MS))) break; diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 456656617c33f..9d08aa99f3cb0 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -565,15 +565,15 @@ static int set_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_rss *rss_ctx, return (-EOPNOTSUPP); } - if (ucmd->rx_hash_fields_mask & ~(MLX4_IB_RX_HASH_SRC_IPV4 | - MLX4_IB_RX_HASH_DST_IPV4 | - MLX4_IB_RX_HASH_SRC_IPV6 | - MLX4_IB_RX_HASH_DST_IPV6 | - MLX4_IB_RX_HASH_SRC_PORT_TCP | - MLX4_IB_RX_HASH_DST_PORT_TCP | - MLX4_IB_RX_HASH_SRC_PORT_UDP | - MLX4_IB_RX_HASH_DST_PORT_UDP | - MLX4_IB_RX_HASH_INNER)) { + if (ucmd->rx_hash_fields_mask & ~(u64)(MLX4_IB_RX_HASH_SRC_IPV4 | + MLX4_IB_RX_HASH_DST_IPV4 | + MLX4_IB_RX_HASH_SRC_IPV6 | + MLX4_IB_RX_HASH_DST_IPV6 | + MLX4_IB_RX_HASH_SRC_PORT_TCP | + MLX4_IB_RX_HASH_DST_PORT_TCP | + MLX4_IB_RX_HASH_SRC_PORT_UDP | + MLX4_IB_RX_HASH_DST_PORT_UDP | + MLX4_IB_RX_HASH_INNER)) { pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n", ucmd->rx_hash_fields_mask); return (-EOPNOTSUPP); diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c index 69bba0ef4a5df..53f43649f7d08 100644 --- a/drivers/infiniband/hw/mthca/mthca_qp.c +++ b/drivers/infiniband/hw/mthca/mthca_qp.c @@ -1393,7 +1393,7 @@ int mthca_alloc_sqp(struct mthca_dev *dev, if (mthca_array_get(&dev->qp_table.qp, mqpn)) err = -EBUSY; else - mthca_array_set(&dev->qp_table.qp, mqpn, qp->sqp); + mthca_array_set(&dev->qp_table.qp, mqpn, qp); spin_unlock_irq(&dev->qp_table.lock); if (err) diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c index 29d05663d4d17..ed2937a4e196f 100644 --- a/drivers/iommu/iommufd/device.c +++ b/drivers/iommu/iommufd/device.c @@ -109,10 +109,7 @@ EXPORT_SYMBOL_NS_GPL(iommufd_device_bind, IOMMUFD); */ void iommufd_device_unbind(struct iommufd_device *idev) { - bool was_destroyed; - - was_destroyed = iommufd_object_destroy_user(idev->ictx, &idev->obj); - WARN_ON(!was_destroyed); + iommufd_object_destroy_user(idev->ictx, &idev->obj); } EXPORT_SYMBOL_NS_GPL(iommufd_device_unbind, IOMMUFD); @@ -382,7 +379,7 @@ void iommufd_device_detach(struct iommufd_device *idev) mutex_unlock(&hwpt->devices_lock); if (hwpt->auto_domain) - iommufd_object_destroy_user(idev->ictx, &hwpt->obj); + iommufd_object_deref_user(idev->ictx, &hwpt->obj); else refcount_dec(&hwpt->obj.users); @@ -456,10 +453,7 @@ EXPORT_SYMBOL_NS_GPL(iommufd_access_create, IOMMUFD); */ void iommufd_access_destroy(struct iommufd_access *access) { - bool was_destroyed; - - was_destroyed = iommufd_object_destroy_user(access->ictx, 
&access->obj); - WARN_ON(!was_destroyed); + iommufd_object_destroy_user(access->ictx, &access->obj); } EXPORT_SYMBOL_NS_GPL(iommufd_access_destroy, IOMMUFD); diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h index b38e67d1988bd..f9790983699ce 100644 --- a/drivers/iommu/iommufd/iommufd_private.h +++ b/drivers/iommu/iommufd/iommufd_private.h @@ -176,8 +176,19 @@ void iommufd_object_abort_and_destroy(struct iommufd_ctx *ictx, struct iommufd_object *obj); void iommufd_object_finalize(struct iommufd_ctx *ictx, struct iommufd_object *obj); -bool iommufd_object_destroy_user(struct iommufd_ctx *ictx, - struct iommufd_object *obj); +void __iommufd_object_destroy_user(struct iommufd_ctx *ictx, + struct iommufd_object *obj, bool allow_fail); +static inline void iommufd_object_destroy_user(struct iommufd_ctx *ictx, + struct iommufd_object *obj) +{ + __iommufd_object_destroy_user(ictx, obj, false); +} +static inline void iommufd_object_deref_user(struct iommufd_ctx *ictx, + struct iommufd_object *obj) +{ + __iommufd_object_destroy_user(ictx, obj, true); +} + struct iommufd_object *_iommufd_object_alloc(struct iommufd_ctx *ictx, size_t size, enum iommufd_object_type type); diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c index 3fbe636c3d8a6..4cf5f73f27084 100644 --- a/drivers/iommu/iommufd/main.c +++ b/drivers/iommu/iommufd/main.c @@ -116,14 +116,56 @@ struct iommufd_object *iommufd_get_object(struct iommufd_ctx *ictx, u32 id, return obj; } +/* + * Remove the given object id from the xarray if the only reference to the + * object is held by the xarray. The caller must call ops destroy(). + */ +static struct iommufd_object *iommufd_object_remove(struct iommufd_ctx *ictx, + u32 id, bool extra_put) +{ + struct iommufd_object *obj; + XA_STATE(xas, &ictx->objects, id); + + xa_lock(&ictx->objects); + obj = xas_load(&xas); + if (xa_is_zero(obj) || !obj) { + obj = ERR_PTR(-ENOENT); + goto out_xa; + } + + /* + * If the caller is holding a ref on obj we put it here under the + * spinlock. + */ + if (extra_put) + refcount_dec(&obj->users); + + if (!refcount_dec_if_one(&obj->users)) { + obj = ERR_PTR(-EBUSY); + goto out_xa; + } + + xas_store(&xas, NULL); + if (ictx->vfio_ioas == container_of(obj, struct iommufd_ioas, obj)) + ictx->vfio_ioas = NULL; + +out_xa: + xa_unlock(&ictx->objects); + + /* The returned object reference count is zero */ + return obj; +} + /* * The caller holds a users refcount and wants to destroy the object. In all * cases the caller no longer has a reference on obj. */ -bool iommufd_object_destroy_user(struct iommufd_ctx *ictx, - struct iommufd_object *obj) +void __iommufd_object_destroy_user(struct iommufd_ctx *ictx, + struct iommufd_object *obj, bool allow_fail) { + struct iommufd_object *ret; + /* * The purpose of the destroy_rwsem is to ensure deterministic * destruction of objects used by external drivers and destroyed by this @@ -131,22 +173,22 @@ bool iommufd_object_destroy_user(struct iommufd_ctx *ictx, * side of this, such as during ioctl execution.
*/ down_write(&obj->destroy_rwsem); - xa_lock(&ictx->objects); - refcount_dec(&obj->users); - if (!refcount_dec_if_one(&obj->users)) { - xa_unlock(&ictx->objects); - up_write(&obj->destroy_rwsem); - return false; - } - __xa_erase(&ictx->objects, obj->id); - if (ictx->vfio_ioas && &ictx->vfio_ioas->obj == obj) - ictx->vfio_ioas = NULL; - xa_unlock(&ictx->objects); + ret = iommufd_object_remove(ictx, obj->id, true); up_write(&obj->destroy_rwsem); + if (allow_fail && IS_ERR(ret)) + return; + + /* + * If there is a bug and we couldn't destroy the object then we did put + * back the caller's refcount and will eventually try to free it again + * during close. + */ + if (WARN_ON(IS_ERR(ret))) + return; + iommufd_object_ops[obj->type].destroy(obj); kfree(obj); - return true; } static int iommufd_destroy(struct iommufd_ucmd *ucmd) @@ -154,13 +196,11 @@ static int iommufd_destroy(struct iommufd_ucmd *ucmd) struct iommu_destroy *cmd = ucmd->cmd; struct iommufd_object *obj; - obj = iommufd_get_object(ucmd->ictx, cmd->id, IOMMUFD_OBJ_ANY); + obj = iommufd_object_remove(ucmd->ictx, cmd->id, false); if (IS_ERR(obj)) return PTR_ERR(obj); - iommufd_ref_to_users(obj); - /* See iommufd_ref_to_users() */ - if (!iommufd_object_destroy_user(ucmd->ictx, obj)) - return -EBUSY; + iommufd_object_ops[obj->type].destroy(obj); + kfree(obj); return 0; } diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c index 3c47846cc5efe..d80669c4caf4f 100644 --- a/drivers/iommu/iommufd/pages.c +++ b/drivers/iommu/iommufd/pages.c @@ -297,7 +297,7 @@ static void batch_clear_carry(struct pfn_batch *batch, unsigned int keep_pfns) batch->pfns[0] = batch->pfns[batch->end - 1] + (batch->npfns[batch->end - 1] - keep_pfns); batch->npfns[0] = keep_pfns; - batch->end = 0; + batch->end = 1; } static void batch_skip_carry(struct pfn_batch *batch, unsigned int skip_pfns) diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c index fa113cb2529a4..6341c0167c4ab 100644 --- a/drivers/irqchip/irq-bcm6345-l1.c +++ b/drivers/irqchip/irq-bcm6345-l1.c @@ -82,6 +82,7 @@ struct bcm6345_l1_chip { }; struct bcm6345_l1_cpu { + struct bcm6345_l1_chip *intc; void __iomem *map_base; unsigned int parent_irq; u32 enable_cache[]; @@ -115,17 +116,11 @@ static inline unsigned int cpu_for_irq(struct bcm6345_l1_chip *intc, static void bcm6345_l1_irq_handle(struct irq_desc *desc) { - struct bcm6345_l1_chip *intc = irq_desc_get_handler_data(desc); - struct bcm6345_l1_cpu *cpu; + struct bcm6345_l1_cpu *cpu = irq_desc_get_handler_data(desc); + struct bcm6345_l1_chip *intc = cpu->intc; struct irq_chip *chip = irq_desc_get_chip(desc); unsigned int idx; -#ifdef CONFIG_SMP - cpu = intc->cpus[cpu_logical_map(smp_processor_id())]; -#else - cpu = intc->cpus[0]; -#endif - chained_irq_enter(chip, desc); for (idx = 0; idx < intc->n_words; idx++) { @@ -253,6 +248,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn, if (!cpu) return -ENOMEM; + cpu->intc = intc; cpu->map_base = ioremap(res.start, sz); if (!cpu->map_base) return -ENOMEM; @@ -271,7 +267,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn, return -EINVAL; } irq_set_chained_handler_and_data(cpu->parent_irq, - bcm6345_l1_irq_handle, intc); + bcm6345_l1_irq_handle, cpu); return 0; } diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 0ec2b1e1df75b..c5cb2830e8537 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -273,13 +273,23 @@ static void vpe_to_cpuid_unlock(struct its_vpe 
*vpe, unsigned long flags) raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags); } +static struct irq_chip its_vpe_irq_chip; + static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags) { - struct its_vlpi_map *map = get_vlpi_map(d); + struct its_vpe *vpe = NULL; int cpu; - if (map) { - cpu = vpe_to_cpuid_lock(map->vpe, flags); + if (d->chip == &its_vpe_irq_chip) { + vpe = irq_data_get_irq_chip_data(d); + } else { + struct its_vlpi_map *map = get_vlpi_map(d); + if (map) + vpe = map->vpe; + } + + if (vpe) { + cpu = vpe_to_cpuid_lock(vpe, flags); } else { /* Physical LPIs are already locked via the irq_desc lock */ struct its_device *its_dev = irq_data_get_irq_chip_data(d); @@ -293,10 +303,18 @@ static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags) static void irq_to_cpuid_unlock(struct irq_data *d, unsigned long flags) { - struct its_vlpi_map *map = get_vlpi_map(d); + struct its_vpe *vpe = NULL; + + if (d->chip == &its_vpe_irq_chip) { + vpe = irq_data_get_irq_chip_data(d); + } else { + struct its_vlpi_map *map = get_vlpi_map(d); + if (map) + vpe = map->vpe; + } - if (map) - vpe_to_cpuid_unlock(map->vpe, flags); + if (vpe) + vpe_to_cpuid_unlock(vpe, flags); } static struct its_collection *valid_col(struct its_collection *col) @@ -1433,14 +1451,29 @@ static void wait_for_syncr(void __iomem *rdbase) cpu_relax(); } -static void direct_lpi_inv(struct irq_data *d) +static void __direct_lpi_inv(struct irq_data *d, u64 val) { - struct its_vlpi_map *map = get_vlpi_map(d); void __iomem *rdbase; unsigned long flags; - u64 val; int cpu; + /* Target the redistributor this LPI is currently routed to */ + cpu = irq_to_cpuid_lock(d, &flags); + raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock); + + rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base; + gic_write_lpir(val, rdbase + GICR_INVLPIR); + wait_for_syncr(rdbase); + + raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock); + irq_to_cpuid_unlock(d, flags); +} + +static void direct_lpi_inv(struct irq_data *d) +{ + struct its_vlpi_map *map = get_vlpi_map(d); + u64 val; + if (map) { struct its_device *its_dev = irq_data_get_irq_chip_data(d); @@ -1453,15 +1486,7 @@ static void direct_lpi_inv(struct irq_data *d) val = d->hwirq; } - /* Target the redistributor this LPI is currently routed to */ - cpu = irq_to_cpuid_lock(d, &flags); - raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock); - rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base; - gic_write_lpir(val, rdbase + GICR_INVLPIR); - - wait_for_syncr(rdbase); - raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock); - irq_to_cpuid_unlock(d, flags); + __direct_lpi_inv(d, val); } static void lpi_update_config(struct irq_data *d, u8 clr, u8 set) @@ -3952,18 +3977,10 @@ static void its_vpe_send_inv(struct irq_data *d) { struct its_vpe *vpe = irq_data_get_irq_chip_data(d); - if (gic_rdists->has_direct_lpi) { - void __iomem *rdbase; - - /* Target the redistributor this VPE is currently known on */ - raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock); - rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base; - gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR); - wait_for_syncr(rdbase); - raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock); - } else { + if (gic_rdists->has_direct_lpi) + __direct_lpi_inv(d, d->parent_data->hwirq); + else its_vpe_send_cmd(vpe, its_send_inv); - } } static void its_vpe_mask_irq(struct irq_data *d) diff --git a/drivers/md/dm-cache-policy-smq.c b/drivers/md/dm-cache-policy-smq.c index 493a8715dc8f8..8bd2ad743d9ae 100644 --- 
a/drivers/md/dm-cache-policy-smq.c +++ b/drivers/md/dm-cache-policy-smq.c @@ -857,7 +857,13 @@ struct smq_policy { struct background_tracker *bg_work; - bool migrations_allowed; + bool migrations_allowed:1; + + /* + * If this is set the policy will try and clean the whole cache + * even if the device is not idle. + */ + bool cleaner:1; }; /*----------------------------------------------------------------*/ @@ -1138,7 +1144,7 @@ static bool clean_target_met(struct smq_policy *mq, bool idle) * Cache entries may not be populated. So we cannot rely on the * size of the clean queue. */ - if (idle) { + if (idle || mq->cleaner) { /* * We'd like to clean everything. */ @@ -1722,11 +1728,9 @@ static void calc_hotspot_params(sector_t origin_size, *hotspot_block_size /= 2u; } -static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size, - sector_t origin_size, - sector_t cache_block_size, - bool mimic_mq, - bool migrations_allowed) +static struct dm_cache_policy * +__smq_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size, + bool mimic_mq, bool migrations_allowed, bool cleaner) { unsigned int i; unsigned int nr_sentinels_per_queue = 2u * NR_CACHE_LEVELS; @@ -1813,6 +1817,7 @@ static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size, goto bad_btracker; mq->migrations_allowed = migrations_allowed; + mq->cleaner = cleaner; return &mq->policy; @@ -1836,21 +1841,24 @@ static struct dm_cache_policy *smq_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size) { - return __smq_create(cache_size, origin_size, cache_block_size, false, true); + return __smq_create(cache_size, origin_size, cache_block_size, + false, true, false); } static struct dm_cache_policy *mq_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size) { - return __smq_create(cache_size, origin_size, cache_block_size, true, true); + return __smq_create(cache_size, origin_size, cache_block_size, + true, true, false); } static struct dm_cache_policy *cleaner_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size) { - return __smq_create(cache_size, origin_size, cache_block_size, false, false); + return __smq_create(cache_size, origin_size, cache_block_size, + false, false, true); } /*----------------------------------------------------------------*/ diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index c8821fcb82998..de3dd6e6bb892 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -3251,8 +3251,7 @@ size_check: r = md_start(&rs->md); if (r) { ti->error = "Failed to start raid array"; - mddev_unlock(&rs->md); - goto bad_md_start; + goto bad_unlock; } /* If raid4/5/6 journal mode explicitly requested (only possible with journal dev) -> set it */ @@ -3260,8 +3259,7 @@ size_check: r = r5c_journal_mode_set(&rs->md, rs->journal_dev.mode); if (r) { ti->error = "Failed to set raid4/5/6 journal mode"; - mddev_unlock(&rs->md); - goto bad_journal_mode_set; + goto bad_unlock; } } @@ -3272,14 +3270,14 @@ size_check: if (rs_is_raid456(rs)) { r = rs_set_raid456_stripe_cache(rs); if (r) - goto bad_stripe_cache; + goto bad_unlock; } /* Now do an early reshape check */ if (test_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags)) { r = rs_check_reshape(rs); if (r) - goto bad_check_reshape; + goto bad_unlock; /* Restore new, ctr requested layout to perform check */ rs_config_restore(rs, &rs_layout); @@ -3288,7 +3286,7 @@ size_check: r = rs->md.pers->check_reshape(&rs->md); if (r) { ti->error = "Reshape check failed"; - goto 
bad_check_reshape; + goto bad_unlock; } } } @@ -3299,11 +3297,9 @@ size_check: mddev_unlock(&rs->md); return 0; -bad_md_start: -bad_journal_mode_set: -bad_stripe_cache: -bad_check_reshape: +bad_unlock: md_stop(&rs->md); + mddev_unlock(&rs->md); bad: raid_set_free(rs); @@ -3314,7 +3310,9 @@ static void raid_dtr(struct dm_target *ti) { struct raid_set *rs = ti->private; + mddev_lock_nointr(&rs->md); md_stop(&rs->md); + mddev_unlock(&rs->md); raid_set_free(rs); } diff --git a/drivers/md/md.c b/drivers/md/md.c index 18384251399ab..32d7ba8069aef 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -6260,6 +6260,8 @@ static void __md_stop(struct mddev *mddev) void md_stop(struct mddev *mddev) { + lockdep_assert_held(&mddev->reconfig_mutex); + /* stop the array and free any attached data structures. * This is called from dm-raid */ diff --git a/drivers/media/i2c/tc358746.c b/drivers/media/i2c/tc358746.c index ec1a193ba161a..25fbce5cabdaa 100644 --- a/drivers/media/i2c/tc358746.c +++ b/drivers/media/i2c/tc358746.c @@ -813,8 +813,8 @@ static unsigned long tc358746_find_pll_settings(struct tc358746 *tc358746, u32 min_delta = 0xffffffff; u16 prediv_max = 17; u16 prediv_min = 1; - u16 m_best, mul; - u16 p_best, p; + u16 m_best = 0, mul; + u16 p_best = 1, p; u8 postdiv; if (fout > 1000 * HZ_PER_MHZ) { diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c index de23627a119a0..82bf8b3be66a2 100644 --- a/drivers/media/platform/amphion/vpu_core.c +++ b/drivers/media/platform/amphion/vpu_core.c @@ -826,7 +826,7 @@ static const struct dev_pm_ops vpu_core_pm_ops = { static struct vpu_core_resources imx8q_enc = { .type = VPU_CORE_TYPE_ENC, - .fwname = "vpu/vpu_fw_imx8_enc.bin", + .fwname = "amphion/vpu/vpu_fw_imx8_enc.bin", .stride = 16, .max_width = 1920, .max_height = 1920, @@ -841,7 +841,7 @@ static struct vpu_core_resources imx8q_enc = { static struct vpu_core_resources imx8q_dec = { .type = VPU_CORE_TYPE_DEC, - .fwname = "vpu/vpu_fw_imx8_dec.bin", + .fwname = "amphion/vpu/vpu_fw_imx8_dec.bin", .stride = 256, .max_width = 8188, .max_height = 8188, diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c index 0051f372a66cf..40cb3cb87ba17 100644 --- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c +++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c @@ -936,148 +936,6 @@ static int mtk_jpeg_set_dec_dst(struct mtk_jpeg_ctx *ctx, return 0; } -static int mtk_jpegenc_get_hw(struct mtk_jpeg_ctx *ctx) -{ - struct mtk_jpegenc_comp_dev *comp_jpeg; - struct mtk_jpeg_dev *jpeg = ctx->jpeg; - unsigned long flags; - int hw_id = -1; - int i; - - spin_lock_irqsave(&jpeg->hw_lock, flags); - for (i = 0; i < MTK_JPEGENC_HW_MAX; i++) { - comp_jpeg = jpeg->enc_hw_dev[i]; - if (comp_jpeg->hw_state == MTK_JPEG_HW_IDLE) { - hw_id = i; - comp_jpeg->hw_state = MTK_JPEG_HW_BUSY; - break; - } - } - spin_unlock_irqrestore(&jpeg->hw_lock, flags); - - return hw_id; -} - -static int mtk_jpegenc_set_hw_param(struct mtk_jpeg_ctx *ctx, - int hw_id, - struct vb2_v4l2_buffer *src_buf, - struct vb2_v4l2_buffer *dst_buf) -{ - struct mtk_jpegenc_comp_dev *jpeg = ctx->jpeg->enc_hw_dev[hw_id]; - - jpeg->hw_param.curr_ctx = ctx; - jpeg->hw_param.src_buffer = src_buf; - jpeg->hw_param.dst_buffer = dst_buf; - - return 0; - } - -static int mtk_jpegenc_put_hw(struct mtk_jpeg_dev *jpeg, int hw_id) -{ - unsigned long flags; - - spin_lock_irqsave(&jpeg->hw_lock, flags); - jpeg->enc_hw_dev[hw_id]->hw_state = MTK_JPEG_HW_IDLE; -
spin_unlock_irqrestore(&jpeg->hw_lock, flags); - - return 0; -} - -static void mtk_jpegenc_worker(struct work_struct *work) -{ - struct mtk_jpegenc_comp_dev *comp_jpeg[MTK_JPEGENC_HW_MAX]; - enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; - struct mtk_jpeg_src_buf *jpeg_dst_buf; - struct vb2_v4l2_buffer *src_buf, *dst_buf; - int ret, i, hw_id = 0; - unsigned long flags; - - struct mtk_jpeg_ctx *ctx = container_of(work, - struct mtk_jpeg_ctx, - jpeg_work); - struct mtk_jpeg_dev *jpeg = ctx->jpeg; - - for (i = 0; i < MTK_JPEGENC_HW_MAX; i++) - comp_jpeg[i] = jpeg->enc_hw_dev[i]; - i = 0; - -retry_select: - hw_id = mtk_jpegenc_get_hw(ctx); - if (hw_id < 0) { - ret = wait_event_interruptible(jpeg->hw_wq, - atomic_read(&jpeg->hw_rdy) > 0); - if (ret != 0 || (i++ > MTK_JPEG_MAX_RETRY_TIME)) { - dev_err(jpeg->dev, "%s : %d, all HW are busy\n", - __func__, __LINE__); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - return; - } - - goto retry_select; - } - - atomic_dec(&jpeg->hw_rdy); - src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); - if (!src_buf) - goto getbuf_fail; - - dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); - if (!dst_buf) - goto getbuf_fail; - - v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true); - - mtk_jpegenc_set_hw_param(ctx, hw_id, src_buf, dst_buf); - ret = pm_runtime_get_sync(comp_jpeg[hw_id]->dev); - if (ret < 0) { - dev_err(jpeg->dev, "%s : %d, pm_runtime_get_sync fail !!!\n", - __func__, __LINE__); - goto enc_end; - } - - ret = clk_prepare_enable(comp_jpeg[hw_id]->venc_clk.clks->clk); - if (ret) { - dev_err(jpeg->dev, "%s : %d, jpegenc clk_prepare_enable fail\n", - __func__, __LINE__); - goto enc_end; - } - - v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - - schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work, - msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC)); - - spin_lock_irqsave(&comp_jpeg[hw_id]->hw_lock, flags); - jpeg_dst_buf = mtk_jpeg_vb2_to_srcbuf(&dst_buf->vb2_buf); - jpeg_dst_buf->curr_ctx = ctx; - jpeg_dst_buf->frame_num = ctx->total_frame_num; - ctx->total_frame_num++; - mtk_jpeg_enc_reset(comp_jpeg[hw_id]->reg_base); - mtk_jpeg_set_enc_dst(ctx, - comp_jpeg[hw_id]->reg_base, - &dst_buf->vb2_buf); - mtk_jpeg_set_enc_src(ctx, - comp_jpeg[hw_id]->reg_base, - &src_buf->vb2_buf); - mtk_jpeg_set_enc_params(ctx, comp_jpeg[hw_id]->reg_base); - mtk_jpeg_enc_start(comp_jpeg[hw_id]->reg_base); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - spin_unlock_irqrestore(&comp_jpeg[hw_id]->hw_lock, flags); - - return; - -enc_end: - v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_buf_done(src_buf, buf_state); - v4l2_m2m_buf_done(dst_buf, buf_state); -getbuf_fail: - atomic_inc(&jpeg->hw_rdy); - mtk_jpegenc_put_hw(jpeg, hw_id); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); -} - static void mtk_jpeg_enc_device_run(void *priv) { struct mtk_jpeg_ctx *ctx = priv; @@ -1128,206 +986,39 @@ static void mtk_jpeg_multicore_enc_device_run(void *priv) queue_work(jpeg->workqueue, &ctx->jpeg_work); } -static int mtk_jpegdec_get_hw(struct mtk_jpeg_ctx *ctx) +static void mtk_jpeg_multicore_dec_device_run(void *priv) { - struct mtk_jpegdec_comp_dev *comp_jpeg; + struct mtk_jpeg_ctx *ctx = priv; struct mtk_jpeg_dev *jpeg = ctx->jpeg; - unsigned long flags; - int hw_id = -1; - int i; - - spin_lock_irqsave(&jpeg->hw_lock, flags); - for (i = 0; i < MTK_JPEGDEC_HW_MAX; i++) { - comp_jpeg = jpeg->dec_hw_dev[i]; - if (comp_jpeg->hw_state == MTK_JPEG_HW_IDLE) { - hw_id = i; - 
comp_jpeg->hw_state = MTK_JPEG_HW_BUSY; - break; - } - } - spin_unlock_irqrestore(&jpeg->hw_lock, flags); - return hw_id; -} - -static int mtk_jpegdec_put_hw(struct mtk_jpeg_dev *jpeg, int hw_id) -{ - unsigned long flags; - - spin_lock_irqsave(&jpeg->hw_lock, flags); - jpeg->dec_hw_dev[hw_id]->hw_state = - MTK_JPEG_HW_IDLE; - spin_unlock_irqrestore(&jpeg->hw_lock, flags); - - return 0; -} - -static int mtk_jpegdec_set_hw_param(struct mtk_jpeg_ctx *ctx, - int hw_id, - struct vb2_v4l2_buffer *src_buf, - struct vb2_v4l2_buffer *dst_buf) -{ - struct mtk_jpegdec_comp_dev *jpeg = - ctx->jpeg->dec_hw_dev[hw_id]; - - jpeg->hw_param.curr_ctx = ctx; - jpeg->hw_param.src_buffer = src_buf; - jpeg->hw_param.dst_buffer = dst_buf; - - return 0; + queue_work(jpeg->workqueue, &ctx->jpeg_work); } -static void mtk_jpegdec_worker(struct work_struct *work) +static void mtk_jpeg_dec_device_run(void *priv) { - struct mtk_jpeg_ctx *ctx = container_of(work, struct mtk_jpeg_ctx, - jpeg_work); - struct mtk_jpegdec_comp_dev *comp_jpeg[MTK_JPEGDEC_HW_MAX]; - enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; - struct mtk_jpeg_src_buf *jpeg_src_buf, *jpeg_dst_buf; - struct vb2_v4l2_buffer *src_buf, *dst_buf; + struct mtk_jpeg_ctx *ctx = priv; struct mtk_jpeg_dev *jpeg = ctx->jpeg; - int ret, i, hw_id = 0; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; + unsigned long flags; + struct mtk_jpeg_src_buf *jpeg_src_buf; struct mtk_jpeg_bs bs; struct mtk_jpeg_fb fb; - unsigned long flags; - - for (i = 0; i < MTK_JPEGDEC_HW_MAX; i++) - comp_jpeg[i] = jpeg->dec_hw_dev[i]; - i = 0; - -retry_select: - hw_id = mtk_jpegdec_get_hw(ctx); - if (hw_id < 0) { - ret = wait_event_interruptible_timeout(jpeg->hw_wq, - atomic_read(&jpeg->hw_rdy) > 0, - MTK_JPEG_HW_TIMEOUT_MSEC); - if (ret != 0 || (i++ > MTK_JPEG_MAX_RETRY_TIME)) { - dev_err(jpeg->dev, "%s : %d, all HW are busy\n", - __func__, __LINE__); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - return; - } - - goto retry_select; - } + int ret; - atomic_dec(&jpeg->hw_rdy); src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); - if (!src_buf) - goto getbuf_fail; - dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); - if (!dst_buf) - goto getbuf_fail; - - v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true); jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf); - jpeg_dst_buf = mtk_jpeg_vb2_to_srcbuf(&dst_buf->vb2_buf); - if (mtk_jpeg_check_resolution_change(ctx, - &jpeg_src_buf->dec_param)) { + if (mtk_jpeg_check_resolution_change(ctx, &jpeg_src_buf->dec_param)) { mtk_jpeg_queue_src_chg_event(ctx); ctx->state = MTK_JPEG_SOURCE_CHANGE; - goto getbuf_fail; + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + return; } - jpeg_src_buf->curr_ctx = ctx; - jpeg_src_buf->frame_num = ctx->total_frame_num; - jpeg_dst_buf->curr_ctx = ctx; - jpeg_dst_buf->frame_num = ctx->total_frame_num; - - mtk_jpegdec_set_hw_param(ctx, hw_id, src_buf, dst_buf); - ret = pm_runtime_get_sync(comp_jpeg[hw_id]->dev); - if (ret < 0) { - dev_err(jpeg->dev, "%s : %d, pm_runtime_get_sync fail !!!\n", - __func__, __LINE__); - goto dec_end; - } - - ret = clk_prepare_enable(comp_jpeg[hw_id]->jdec_clk.clks->clk); - if (ret) { - dev_err(jpeg->dev, "%s : %d, jpegdec clk_prepare_enable fail\n", - __func__, __LINE__); - goto clk_end; - } - - v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - - schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work, - msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC)); - - mtk_jpeg_set_dec_src(ctx, 
&src_buf->vb2_buf, &bs); - if (mtk_jpeg_set_dec_dst(ctx, - &jpeg_src_buf->dec_param, - &dst_buf->vb2_buf, &fb)) { - dev_err(jpeg->dev, "%s : %d, mtk_jpeg_set_dec_dst fail\n", - __func__, __LINE__); - goto setdst_end; - } - - spin_lock_irqsave(&comp_jpeg[hw_id]->hw_lock, flags); - ctx->total_frame_num++; - mtk_jpeg_dec_reset(comp_jpeg[hw_id]->reg_base); - mtk_jpeg_dec_set_config(comp_jpeg[hw_id]->reg_base, - &jpeg_src_buf->dec_param, - jpeg_src_buf->bs_size, - &bs, - &fb); - mtk_jpeg_dec_start(comp_jpeg[hw_id]->reg_base); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - spin_unlock_irqrestore(&comp_jpeg[hw_id]->hw_lock, flags); - - return; - -setdst_end: - clk_disable_unprepare(comp_jpeg[hw_id]->jdec_clk.clks->clk); -clk_end: - pm_runtime_put(comp_jpeg[hw_id]->dev); -dec_end: - v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - v4l2_m2m_buf_done(src_buf, buf_state); - v4l2_m2m_buf_done(dst_buf, buf_state); -getbuf_fail: - atomic_inc(&jpeg->hw_rdy); - mtk_jpegdec_put_hw(jpeg, hw_id); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); -} - -static void mtk_jpeg_multicore_dec_device_run(void *priv) -{ - struct mtk_jpeg_ctx *ctx = priv; - struct mtk_jpeg_dev *jpeg = ctx->jpeg; - - queue_work(jpeg->workqueue, &ctx->jpeg_work); -} - -static void mtk_jpeg_dec_device_run(void *priv) -{ - struct mtk_jpeg_ctx *ctx = priv; - struct mtk_jpeg_dev *jpeg = ctx->jpeg; - struct vb2_v4l2_buffer *src_buf, *dst_buf; - enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; - unsigned long flags; - struct mtk_jpeg_src_buf *jpeg_src_buf; - struct mtk_jpeg_bs bs; - struct mtk_jpeg_fb fb; - int ret; - - src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); - dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); - jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf); - - if (mtk_jpeg_check_resolution_change(ctx, &jpeg_src_buf->dec_param)) { - mtk_jpeg_queue_src_chg_event(ctx); - ctx->state = MTK_JPEG_SOURCE_CHANGE; - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - return; - } - - ret = pm_runtime_resume_and_get(jpeg->dev); - if (ret < 0) + ret = pm_runtime_resume_and_get(jpeg->dev); + if (ret < 0) goto dec_end; schedule_delayed_work(&jpeg->job_timeout_work, @@ -1430,101 +1121,6 @@ static void mtk_jpeg_clk_off(struct mtk_jpeg_dev *jpeg) jpeg->variant->clks); } -static irqreturn_t mtk_jpeg_enc_done(struct mtk_jpeg_dev *jpeg) -{ - struct mtk_jpeg_ctx *ctx; - struct vb2_v4l2_buffer *src_buf, *dst_buf; - enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; - u32 result_size; - - ctx = v4l2_m2m_get_curr_priv(jpeg->m2m_dev); - if (!ctx) { - v4l2_err(&jpeg->v4l2_dev, "Context is NULL\n"); - return IRQ_HANDLED; - } - - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - - result_size = mtk_jpeg_enc_get_file_size(jpeg->reg_base); - vb2_set_plane_payload(&dst_buf->vb2_buf, 0, result_size); - - buf_state = VB2_BUF_STATE_DONE; - - v4l2_m2m_buf_done(src_buf, buf_state); - v4l2_m2m_buf_done(dst_buf, buf_state); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - pm_runtime_put(ctx->jpeg->dev); - return IRQ_HANDLED; -} - -static irqreturn_t mtk_jpeg_enc_irq(int irq, void *priv) -{ - struct mtk_jpeg_dev *jpeg = priv; - u32 irq_status; - irqreturn_t ret = IRQ_NONE; - - cancel_delayed_work(&jpeg->job_timeout_work); - - irq_status = readl(jpeg->reg_base + JPEG_ENC_INT_STS) & - JPEG_ENC_INT_STATUS_MASK_ALLIRQ; - if (irq_status) - writel(0, jpeg->reg_base + JPEG_ENC_INT_STS); - - if (!(irq_status & JPEG_ENC_INT_STATUS_DONE)) 
- return ret; - - ret = mtk_jpeg_enc_done(jpeg); - return ret; -} - -static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv) -{ - struct mtk_jpeg_dev *jpeg = priv; - struct mtk_jpeg_ctx *ctx; - struct vb2_v4l2_buffer *src_buf, *dst_buf; - struct mtk_jpeg_src_buf *jpeg_src_buf; - enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; - u32 dec_irq_ret; - u32 dec_ret; - int i; - - cancel_delayed_work(&jpeg->job_timeout_work); - - dec_ret = mtk_jpeg_dec_get_int_status(jpeg->reg_base); - dec_irq_ret = mtk_jpeg_dec_enum_result(dec_ret); - ctx = v4l2_m2m_get_curr_priv(jpeg->m2m_dev); - if (!ctx) { - v4l2_err(&jpeg->v4l2_dev, "Context is NULL\n"); - return IRQ_HANDLED; - } - - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); - dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); - jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf); - - if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW) - mtk_jpeg_dec_reset(jpeg->reg_base); - - if (dec_irq_ret != MTK_JPEG_DEC_RESULT_EOF_DONE) { - dev_err(jpeg->dev, "decode failed\n"); - goto dec_end; - } - - for (i = 0; i < dst_buf->vb2_buf.num_planes; i++) - vb2_set_plane_payload(&dst_buf->vb2_buf, i, - jpeg_src_buf->dec_param.comp_size[i]); - - buf_state = VB2_BUF_STATE_DONE; - -dec_end: - v4l2_m2m_buf_done(src_buf, buf_state); - v4l2_m2m_buf_done(dst_buf, buf_state); - v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); - pm_runtime_put(ctx->jpeg->dev); - return IRQ_HANDLED; -} - static void mtk_jpeg_set_default_params(struct mtk_jpeg_ctx *ctx) { struct mtk_jpeg_q_data *q = &ctx->out_q; @@ -1637,15 +1233,6 @@ static const struct v4l2_file_operations mtk_jpeg_fops = { .mmap = v4l2_m2m_fop_mmap, }; -static struct clk_bulk_data mt8173_jpeg_dec_clocks[] = { - { .id = "jpgdec-smi" }, - { .id = "jpgdec" }, -}; - -static struct clk_bulk_data mtk_jpeg_clocks[] = { - { .id = "jpgenc" }, -}; - static void mtk_jpeg_job_timeout_work(struct work_struct *work) { struct mtk_jpeg_dev *jpeg = container_of(work, struct mtk_jpeg_dev, @@ -1866,7 +1453,419 @@ static const struct dev_pm_ops mtk_jpeg_pm_ops = { SET_RUNTIME_PM_OPS(mtk_jpeg_pm_suspend, mtk_jpeg_pm_resume, NULL) }; -#if defined(CONFIG_OF) +static int mtk_jpegenc_get_hw(struct mtk_jpeg_ctx *ctx) +{ + struct mtk_jpegenc_comp_dev *comp_jpeg; + struct mtk_jpeg_dev *jpeg = ctx->jpeg; + unsigned long flags; + int hw_id = -1; + int i; + + spin_lock_irqsave(&jpeg->hw_lock, flags); + for (i = 0; i < MTK_JPEGENC_HW_MAX; i++) { + comp_jpeg = jpeg->enc_hw_dev[i]; + if (comp_jpeg->hw_state == MTK_JPEG_HW_IDLE) { + hw_id = i; + comp_jpeg->hw_state = MTK_JPEG_HW_BUSY; + break; + } + } + spin_unlock_irqrestore(&jpeg->hw_lock, flags); + + return hw_id; +} + +static int mtk_jpegenc_set_hw_param(struct mtk_jpeg_ctx *ctx, + int hw_id, + struct vb2_v4l2_buffer *src_buf, + struct vb2_v4l2_buffer *dst_buf) +{ + struct mtk_jpegenc_comp_dev *jpeg = ctx->jpeg->enc_hw_dev[hw_id]; + + jpeg->hw_param.curr_ctx = ctx; + jpeg->hw_param.src_buffer = src_buf; + jpeg->hw_param.dst_buffer = dst_buf; + + return 0; +} + +static int mtk_jpegenc_put_hw(struct mtk_jpeg_dev *jpeg, int hw_id) +{ + unsigned long flags; + + spin_lock_irqsave(&jpeg->hw_lock, flags); + jpeg->enc_hw_dev[hw_id]->hw_state = MTK_JPEG_HW_IDLE; + spin_unlock_irqrestore(&jpeg->hw_lock, flags); + + return 0; +} + +static int mtk_jpegdec_get_hw(struct mtk_jpeg_ctx *ctx) +{ + struct mtk_jpegdec_comp_dev *comp_jpeg; + struct mtk_jpeg_dev *jpeg = ctx->jpeg; + unsigned long flags; + int hw_id = -1; + int i; + + spin_lock_irqsave(&jpeg->hw_lock, flags); + for (i = 0; i < 
MTK_JPEGDEC_HW_MAX; i++) { + comp_jpeg = jpeg->dec_hw_dev[i]; + if (comp_jpeg->hw_state == MTK_JPEG_HW_IDLE) { + hw_id = i; + comp_jpeg->hw_state = MTK_JPEG_HW_BUSY; + break; + } + } + spin_unlock_irqrestore(&jpeg->hw_lock, flags); + + return hw_id; +} + +static int mtk_jpegdec_put_hw(struct mtk_jpeg_dev *jpeg, int hw_id) +{ + unsigned long flags; + + spin_lock_irqsave(&jpeg->hw_lock, flags); + jpeg->dec_hw_dev[hw_id]->hw_state = + MTK_JPEG_HW_IDLE; + spin_unlock_irqrestore(&jpeg->hw_lock, flags); + + return 0; +} + +static int mtk_jpegdec_set_hw_param(struct mtk_jpeg_ctx *ctx, + int hw_id, + struct vb2_v4l2_buffer *src_buf, + struct vb2_v4l2_buffer *dst_buf) +{ + struct mtk_jpegdec_comp_dev *jpeg = + ctx->jpeg->dec_hw_dev[hw_id]; + + jpeg->hw_param.curr_ctx = ctx; + jpeg->hw_param.src_buffer = src_buf; + jpeg->hw_param.dst_buffer = dst_buf; + + return 0; +} + +static irqreturn_t mtk_jpeg_enc_done(struct mtk_jpeg_dev *jpeg) +{ + struct mtk_jpeg_ctx *ctx; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; + u32 result_size; + + ctx = v4l2_m2m_get_curr_priv(jpeg->m2m_dev); + if (!ctx) { + v4l2_err(&jpeg->v4l2_dev, "Context is NULL\n"); + return IRQ_HANDLED; + } + + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + result_size = mtk_jpeg_enc_get_file_size(jpeg->reg_base); + vb2_set_plane_payload(&dst_buf->vb2_buf, 0, result_size); + + buf_state = VB2_BUF_STATE_DONE; + + v4l2_m2m_buf_done(src_buf, buf_state); + v4l2_m2m_buf_done(dst_buf, buf_state); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + pm_runtime_put(ctx->jpeg->dev); + return IRQ_HANDLED; +} + +static void mtk_jpegenc_worker(struct work_struct *work) +{ + struct mtk_jpegenc_comp_dev *comp_jpeg[MTK_JPEGENC_HW_MAX]; + enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; + struct mtk_jpeg_src_buf *jpeg_dst_buf; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + int ret, i, hw_id = 0; + unsigned long flags; + + struct mtk_jpeg_ctx *ctx = container_of(work, + struct mtk_jpeg_ctx, + jpeg_work); + struct mtk_jpeg_dev *jpeg = ctx->jpeg; + + for (i = 0; i < MTK_JPEGENC_HW_MAX; i++) + comp_jpeg[i] = jpeg->enc_hw_dev[i]; + i = 0; + +retry_select: + hw_id = mtk_jpegenc_get_hw(ctx); + if (hw_id < 0) { + ret = wait_event_interruptible(jpeg->hw_wq, + atomic_read(&jpeg->hw_rdy) > 0); + if (ret != 0 || (i++ > MTK_JPEG_MAX_RETRY_TIME)) { + dev_err(jpeg->dev, "%s : %d, all HW are busy\n", + __func__, __LINE__); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + return; + } + + goto retry_select; + } + + atomic_dec(&jpeg->hw_rdy); + src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); + if (!src_buf) + goto getbuf_fail; + + dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); + if (!dst_buf) + goto getbuf_fail; + + v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true); + + mtk_jpegenc_set_hw_param(ctx, hw_id, src_buf, dst_buf); + ret = pm_runtime_get_sync(comp_jpeg[hw_id]->dev); + if (ret < 0) { + dev_err(jpeg->dev, "%s : %d, pm_runtime_get_sync fail !!!\n", + __func__, __LINE__); + goto enc_end; + } + + ret = clk_prepare_enable(comp_jpeg[hw_id]->venc_clk.clks->clk); + if (ret) { + dev_err(jpeg->dev, "%s : %d, jpegenc clk_prepare_enable fail\n", + __func__, __LINE__); + goto enc_end; + } + + v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work, + msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC)); + + spin_lock_irqsave(&comp_jpeg[hw_id]->hw_lock, 
flags); + jpeg_dst_buf = mtk_jpeg_vb2_to_srcbuf(&dst_buf->vb2_buf); + jpeg_dst_buf->curr_ctx = ctx; + jpeg_dst_buf->frame_num = ctx->total_frame_num; + ctx->total_frame_num++; + mtk_jpeg_enc_reset(comp_jpeg[hw_id]->reg_base); + mtk_jpeg_set_enc_dst(ctx, + comp_jpeg[hw_id]->reg_base, + &dst_buf->vb2_buf); + mtk_jpeg_set_enc_src(ctx, + comp_jpeg[hw_id]->reg_base, + &src_buf->vb2_buf); + mtk_jpeg_set_enc_params(ctx, comp_jpeg[hw_id]->reg_base); + mtk_jpeg_enc_start(comp_jpeg[hw_id]->reg_base); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + spin_unlock_irqrestore(&comp_jpeg[hw_id]->hw_lock, flags); + + return; + +enc_end: + v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_buf_done(src_buf, buf_state); + v4l2_m2m_buf_done(dst_buf, buf_state); +getbuf_fail: + atomic_inc(&jpeg->hw_rdy); + mtk_jpegenc_put_hw(jpeg, hw_id); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); +} + +static void mtk_jpegdec_worker(struct work_struct *work) +{ + struct mtk_jpeg_ctx *ctx = container_of(work, struct mtk_jpeg_ctx, + jpeg_work); + struct mtk_jpegdec_comp_dev *comp_jpeg[MTK_JPEGDEC_HW_MAX]; + enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; + struct mtk_jpeg_src_buf *jpeg_src_buf, *jpeg_dst_buf; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + struct mtk_jpeg_dev *jpeg = ctx->jpeg; + int ret, i, hw_id = 0; + struct mtk_jpeg_bs bs; + struct mtk_jpeg_fb fb; + unsigned long flags; + + for (i = 0; i < MTK_JPEGDEC_HW_MAX; i++) + comp_jpeg[i] = jpeg->dec_hw_dev[i]; + i = 0; + +retry_select: + hw_id = mtk_jpegdec_get_hw(ctx); + if (hw_id < 0) { + ret = wait_event_interruptible_timeout(jpeg->hw_wq, + atomic_read(&jpeg->hw_rdy) > 0, + MTK_JPEG_HW_TIMEOUT_MSEC); + if (ret != 0 || (i++ > MTK_JPEG_MAX_RETRY_TIME)) { + dev_err(jpeg->dev, "%s : %d, all HW are busy\n", + __func__, __LINE__); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + return; + } + + goto retry_select; + } + + atomic_dec(&jpeg->hw_rdy); + src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); + if (!src_buf) + goto getbuf_fail; + + dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); + if (!dst_buf) + goto getbuf_fail; + + v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true); + jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf); + jpeg_dst_buf = mtk_jpeg_vb2_to_srcbuf(&dst_buf->vb2_buf); + + if (mtk_jpeg_check_resolution_change(ctx, + &jpeg_src_buf->dec_param)) { + mtk_jpeg_queue_src_chg_event(ctx); + ctx->state = MTK_JPEG_SOURCE_CHANGE; + goto getbuf_fail; + } + + jpeg_src_buf->curr_ctx = ctx; + jpeg_src_buf->frame_num = ctx->total_frame_num; + jpeg_dst_buf->curr_ctx = ctx; + jpeg_dst_buf->frame_num = ctx->total_frame_num; + + mtk_jpegdec_set_hw_param(ctx, hw_id, src_buf, dst_buf); + ret = pm_runtime_get_sync(comp_jpeg[hw_id]->dev); + if (ret < 0) { + dev_err(jpeg->dev, "%s : %d, pm_runtime_get_sync fail !!!\n", + __func__, __LINE__); + goto dec_end; + } + + ret = clk_prepare_enable(comp_jpeg[hw_id]->jdec_clk.clks->clk); + if (ret) { + dev_err(jpeg->dev, "%s : %d, jpegdec clk_prepare_enable fail\n", + __func__, __LINE__); + goto clk_end; + } + + v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work, + msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC)); + + mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs); + if (mtk_jpeg_set_dec_dst(ctx, + &jpeg_src_buf->dec_param, + &dst_buf->vb2_buf, &fb)) { + dev_err(jpeg->dev, "%s : %d, mtk_jpeg_set_dec_dst fail\n", + __func__, __LINE__); + goto setdst_end; + 
} + + spin_lock_irqsave(&comp_jpeg[hw_id]->hw_lock, flags); + ctx->total_frame_num++; + mtk_jpeg_dec_reset(comp_jpeg[hw_id]->reg_base); + mtk_jpeg_dec_set_config(comp_jpeg[hw_id]->reg_base, + &jpeg_src_buf->dec_param, + jpeg_src_buf->bs_size, + &bs, + &fb); + mtk_jpeg_dec_start(comp_jpeg[hw_id]->reg_base); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + spin_unlock_irqrestore(&comp_jpeg[hw_id]->hw_lock, flags); + + return; + +setdst_end: + clk_disable_unprepare(comp_jpeg[hw_id]->jdec_clk.clks->clk); +clk_end: + pm_runtime_put(comp_jpeg[hw_id]->dev); +dec_end: + v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + v4l2_m2m_buf_done(src_buf, buf_state); + v4l2_m2m_buf_done(dst_buf, buf_state); +getbuf_fail: + atomic_inc(&jpeg->hw_rdy); + mtk_jpegdec_put_hw(jpeg, hw_id); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); +} + +static irqreturn_t mtk_jpeg_enc_irq(int irq, void *priv) +{ + struct mtk_jpeg_dev *jpeg = priv; + u32 irq_status; + irqreturn_t ret = IRQ_NONE; + + cancel_delayed_work(&jpeg->job_timeout_work); + + irq_status = readl(jpeg->reg_base + JPEG_ENC_INT_STS) & + JPEG_ENC_INT_STATUS_MASK_ALLIRQ; + if (irq_status) + writel(0, jpeg->reg_base + JPEG_ENC_INT_STS); + + if (!(irq_status & JPEG_ENC_INT_STATUS_DONE)) + return ret; + + ret = mtk_jpeg_enc_done(jpeg); + return ret; +} + +static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv) +{ + struct mtk_jpeg_dev *jpeg = priv; + struct mtk_jpeg_ctx *ctx; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + struct mtk_jpeg_src_buf *jpeg_src_buf; + enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR; + u32 dec_irq_ret; + u32 dec_ret; + int i; + + cancel_delayed_work(&jpeg->job_timeout_work); + + dec_ret = mtk_jpeg_dec_get_int_status(jpeg->reg_base); + dec_irq_ret = mtk_jpeg_dec_enum_result(dec_ret); + ctx = v4l2_m2m_get_curr_priv(jpeg->m2m_dev); + if (!ctx) { + v4l2_err(&jpeg->v4l2_dev, "Context is NULL\n"); + return IRQ_HANDLED; + } + + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf); + + if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW) + mtk_jpeg_dec_reset(jpeg->reg_base); + + if (dec_irq_ret != MTK_JPEG_DEC_RESULT_EOF_DONE) { + dev_err(jpeg->dev, "decode failed\n"); + goto dec_end; + } + + for (i = 0; i < dst_buf->vb2_buf.num_planes; i++) + vb2_set_plane_payload(&dst_buf->vb2_buf, i, + jpeg_src_buf->dec_param.comp_size[i]); + + buf_state = VB2_BUF_STATE_DONE; + +dec_end: + v4l2_m2m_buf_done(src_buf, buf_state); + v4l2_m2m_buf_done(dst_buf, buf_state); + v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx); + pm_runtime_put(ctx->jpeg->dev); + return IRQ_HANDLED; +} + +static struct clk_bulk_data mtk_jpeg_clocks[] = { + { .id = "jpgenc" }, +}; + +static struct clk_bulk_data mt8173_jpeg_dec_clocks[] = { + { .id = "jpgdec-smi" }, + { .id = "jpgdec" }, +}; + static const struct mtk_jpeg_variant mt8173_jpeg_drvdata = { .clks = mt8173_jpeg_dec_clocks, .num_clks = ARRAY_SIZE(mt8173_jpeg_dec_clocks), @@ -1949,14 +1948,13 @@ static const struct of_device_id mtk_jpeg_match[] = { }; MODULE_DEVICE_TABLE(of, mtk_jpeg_match); -#endif static struct platform_driver mtk_jpeg_driver = { .probe = mtk_jpeg_probe, .remove_new = mtk_jpeg_remove, .driver = { .name = MTK_JPEG_NAME, - .of_match_table = of_match_ptr(mtk_jpeg_match), + .of_match_table = mtk_jpeg_match, .pm = &mtk_jpeg_pm_ops, }, }; diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c 
b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
index 869068fac5e2f..baa7be58ce691 100644
--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
@@ -39,7 +39,6 @@ enum mtk_jpeg_color {
 	MTK_JPEG_COLOR_400	= 0x00110000
 };
 
-#if defined(CONFIG_OF)
 static const struct of_device_id mtk_jpegdec_hw_ids[] = {
 	{
 		.compatible = "mediatek,mt8195-jpgdec-hw",
@@ -47,7 +46,6 @@ static const struct of_device_id mtk_jpegdec_hw_ids[] = {
 	{},
 };
 MODULE_DEVICE_TABLE(of, mtk_jpegdec_hw_ids);
-#endif
 
 static inline int mtk_jpeg_verify_align(u32 val, int align, u32 reg)
 {
@@ -653,7 +651,7 @@ static struct platform_driver mtk_jpegdec_hw_driver = {
 	.probe = mtk_jpegdec_hw_probe,
 	.driver = {
 		.name = "mtk-jpegdec-hw",
-		.of_match_table = of_match_ptr(mtk_jpegdec_hw_ids),
+		.of_match_table = mtk_jpegdec_hw_ids,
 	},
 };
 
diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
index 71e85b4bbf127..244018365b6f1 100644
--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
+++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_enc_hw.c
@@ -46,7 +46,6 @@ static const struct mtk_jpeg_enc_qlt mtk_jpeg_enc_quality[] = {
 	{.quality_param = 97, .hardware_value = JPEG_ENC_QUALITY_Q97},
 };
 
-#if defined(CONFIG_OF)
 static const struct of_device_id mtk_jpegenc_drv_ids[] = {
 	{
 		.compatible = "mediatek,mt8195-jpgenc-hw",
@@ -54,7 +53,6 @@ static const struct of_device_id mtk_jpegenc_drv_ids[] = {
 	{},
 };
 MODULE_DEVICE_TABLE(of, mtk_jpegenc_drv_ids);
-#endif
 
 void mtk_jpeg_enc_reset(void __iomem *base)
 {
@@ -377,7 +375,7 @@ static struct platform_driver mtk_jpegenc_hw_driver = {
 	.probe = mtk_jpegenc_hw_probe,
 	.driver = {
 		.name = "mtk-jpegenc-hw",
-		.of_match_table = of_match_ptr(mtk_jpegenc_drv_ids),
+		.of_match_table = mtk_jpegenc_drv_ids,
 	},
 };
 
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 091e035c76a6f..1a0776f9b008a 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1507,6 +1507,11 @@ static void bond_setup_by_slave(struct net_device *bond_dev,
 
 	memcpy(bond_dev->broadcast, slave_dev->broadcast,
 		slave_dev->addr_len);
+
+	if (slave_dev->flags & IFF_POINTOPOINT) {
+		bond_dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+		bond_dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
+	}
 }
 
 /* On bonding slaves other than the currently active slave, suppress
diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
index f418066569fcc..bd9eb066ecf15 100644
--- a/drivers/net/can/usb/gs_usb.c
+++ b/drivers/net/can/usb/gs_usb.c
@@ -1030,6 +1030,8 @@ static int gs_can_close(struct net_device *netdev)
 	usb_kill_anchored_urbs(&dev->tx_submitted);
 	atomic_set(&dev->active_tx_urbs, 0);
 
+	dev->can.state = CAN_STATE_STOPPED;
+
 	/* reset the device */
 	rc = gs_cmd_reset(dev);
 	if (rc < 0)
diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index d775a14784f7e..613af28663d79 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -576,8 +576,11 @@ static struct regmap_config qca8k_regmap_config = {
 	.rd_table = &qca8k_readable_table,
 	.disable_locking = true, /* Locking is handled by qca8k read/write */
 	.cache_type = REGCACHE_NONE, /* Explicitly disable CACHE */
-	.max_raw_read = 32, /* mgmt eth can read/write up to 8 registers at time */
-	.max_raw_write = 32,
+	.max_raw_read = 32, /* mgmt eth can read up to 8 registers at a time */
+	/* ATU regs suffer from a bug where some data are
not correctly + * written. Disable bulk write to correctly write ATU entry. + */ + .use_single_write = true, }; static int diff --git a/drivers/net/dsa/qca/qca8k-common.c b/drivers/net/dsa/qca/qca8k-common.c index 96773e4325582..8536c4f6363e9 100644 --- a/drivers/net/dsa/qca/qca8k-common.c +++ b/drivers/net/dsa/qca/qca8k-common.c @@ -244,7 +244,7 @@ void qca8k_fdb_flush(struct qca8k_priv *priv) } static int qca8k_fdb_search_and_insert(struct qca8k_priv *priv, u8 port_mask, - const u8 *mac, u16 vid) + const u8 *mac, u16 vid, u8 aging) { struct qca8k_fdb fdb = { 0 }; int ret; @@ -261,10 +261,12 @@ static int qca8k_fdb_search_and_insert(struct qca8k_priv *priv, u8 port_mask, goto exit; /* Rule exist. Delete first */ - if (!fdb.aging) { + if (fdb.aging) { ret = qca8k_fdb_access(priv, QCA8K_FDB_PURGE, -1); if (ret) goto exit; + } else { + fdb.aging = aging; } /* Add port to fdb portmask */ @@ -291,6 +293,10 @@ static int qca8k_fdb_search_and_del(struct qca8k_priv *priv, u8 port_mask, if (ret < 0) goto exit; + ret = qca8k_fdb_read(priv, &fdb); + if (ret < 0) + goto exit; + /* Rule doesn't exist. Why delete? */ if (!fdb.aging) { ret = -EINVAL; @@ -810,7 +816,11 @@ int qca8k_port_mdb_add(struct dsa_switch *ds, int port, const u8 *addr = mdb->addr; u16 vid = mdb->vid; - return qca8k_fdb_search_and_insert(priv, BIT(port), addr, vid); + if (!vid) + vid = QCA8K_PORT_VID_DEF; + + return qca8k_fdb_search_and_insert(priv, BIT(port), addr, vid, + QCA8K_ATU_STATUS_STATIC); } int qca8k_port_mdb_del(struct dsa_switch *ds, int port, @@ -821,6 +831,9 @@ int qca8k_port_mdb_del(struct dsa_switch *ds, int port, const u8 *addr = mdb->addr; u16 vid = mdb->vid; + if (!vid) + vid = QCA8K_PORT_VID_DEF; + return qca8k_fdb_search_and_del(priv, BIT(port), addr, vid); } diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c index 5db0f3495a32e..5935be190b9e2 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c @@ -1641,8 +1641,11 @@ static int atl1e_tso_csum(struct atl1e_adapter *adapter, real_len = (((unsigned char *)ip_hdr(skb) - skb->data) + ntohs(ip_hdr(skb)->tot_len)); - if (real_len < skb->len) - pskb_trim(skb, real_len); + if (real_len < skb->len) { + err = pskb_trim(skb, real_len); + if (err) + return err; + } hdr_len = skb_tcp_all_headers(skb); if (unlikely(skb->len == hdr_len)) { diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c index c8444bcdf5270..02aa6fd8ebc2d 100644 --- a/drivers/net/ethernet/atheros/atlx/atl1.c +++ b/drivers/net/ethernet/atheros/atlx/atl1.c @@ -2113,8 +2113,11 @@ static int atl1_tso(struct atl1_adapter *adapter, struct sk_buff *skb, real_len = (((unsigned char *)iph - skb->data) + ntohs(iph->tot_len)); - if (real_len < skb->len) - pskb_trim(skb, real_len); + if (real_len < skb->len) { + err = pskb_trim(skb, real_len); + if (err) + return err; + } hdr_len = skb_tcp_all_headers(skb); if (skb->len == hdr_len) { iph->check = 0; diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 0defd519ba62e..7fa057d379c1a 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -1138,7 +1138,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter, (lancer_chip(adapter) || BE3_chip(adapter) || skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) { ip = (struct iphdr *)ip_hdr(skb); - pskb_trim(skb, eth_hdr_len + 
ntohs(ip->tot_len)); + if (unlikely(pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len)))) + goto tx_drop; } /* If vlan tag is already inlined in the packet, skip HW VLAN diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 7659888a96917..92410f30ad241 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -1372,7 +1372,7 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts, } static void -fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) +fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget) { struct fec_enet_private *fep; struct xdp_frame *xdpf; @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) if (!skb) goto tx_buf_done; } else { + /* Tx processing cannot call any XDP (or page pool) APIs if + * the "budget" is 0. Because NAPI is called with budget of + * 0 (such as netpoll) indicates we may be in an IRQ context, + * however, we can't use the page pool from IRQ context. + */ + if (unlikely(!budget)) + break; + xdpf = txq->tx_buf[index].xdp; if (bdp->cbd_bufaddr) dma_unmap_single(&fep->pdev->dev, @@ -1508,14 +1516,14 @@ tx_buf_done: writel(0, txq->bd.reg_desc_active); } -static void fec_enet_tx(struct net_device *ndev) +static void fec_enet_tx(struct net_device *ndev, int budget) { struct fec_enet_private *fep = netdev_priv(ndev); int i; /* Make sure that AVB queues are processed first. */ for (i = fep->num_tx_queues - 1; i >= 0; i--) - fec_enet_tx_queue(ndev, i); + fec_enet_tx_queue(ndev, i, budget); } static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq, @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget) do { done += fec_enet_rx(ndev, budget - done); - fec_enet_tx(ndev); + fec_enet_tx(ndev, budget); } while ((done < budget) && fec_enet_collect_events(fep)); if (done < budget) { @@ -3908,6 +3916,8 @@ static int fec_enet_xdp_xmit(struct net_device *dev, __netif_tx_lock(nq, cpu); + /* Avoid tx timeout as XDP shares the queue with kernel stack */ + txq_trans_cond_update(nq); for (i = 0; i < num_frames; i++) { if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) != 0) break; diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h index 9c9c72dc57e00..06f29e80104c0 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h +++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h @@ -31,6 +31,7 @@ #include #include #include +#include #include #include @@ -407,7 +408,7 @@ struct hnae3_ae_dev { unsigned long hw_err_reset_req; struct hnae3_dev_specs dev_specs; u32 dev_version; - unsigned long caps[BITS_TO_LONGS(HNAE3_DEV_CAPS_MAX_NUM)]; + DECLARE_BITMAP(caps, HNAE3_DEV_CAPS_MAX_NUM); void *priv; }; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c index b85c412683ddc..16ba98ff2c9b1 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c @@ -171,6 +171,20 @@ static const struct hclge_comm_caps_bit_map hclge_vf_cmd_caps[] = { {HCLGE_COMM_CAP_GRO_B, HNAE3_DEV_SUPPORT_GRO_B}, }; +static void +hclge_comm_capability_to_bitmap(unsigned long *bitmap, __le32 *caps) +{ + const unsigned int words = HCLGE_COMM_QUERY_CAP_LENGTH; + u32 val[HCLGE_COMM_QUERY_CAP_LENGTH]; + unsigned int i; + + for (i = 0; i < words; i++) + val[i] = __le32_to_cpu(caps[i]); + + bitmap_from_arr32(bitmap, val, + 
HCLGE_COMM_QUERY_CAP_LENGTH * BITS_PER_TYPE(u32)); +} + static void hclge_comm_parse_capability(struct hnae3_ae_dev *ae_dev, bool is_pf, struct hclge_comm_query_version_cmd *cmd) @@ -179,11 +193,12 @@ hclge_comm_parse_capability(struct hnae3_ae_dev *ae_dev, bool is_pf, is_pf ? hclge_pf_cmd_caps : hclge_vf_cmd_caps; u32 size = is_pf ? ARRAY_SIZE(hclge_pf_cmd_caps) : ARRAY_SIZE(hclge_vf_cmd_caps); - u32 caps, i; + DECLARE_BITMAP(caps, HCLGE_COMM_QUERY_CAP_LENGTH * BITS_PER_TYPE(u32)); + u32 i; - caps = __le32_to_cpu(cmd->caps[0]); + hclge_comm_capability_to_bitmap(caps, cmd->caps); for (i = 0; i < size; i++) - if (hnae3_get_bit(caps, caps_map[i].imp_bit)) + if (test_bit(caps_map[i].imp_bit, caps)) set_bit(caps_map[i].local_bit, ae_dev->caps); } diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c index c4aded65e848b..09362823140d5 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c @@ -52,7 +52,10 @@ static void hclge_tm_info_to_ieee_ets(struct hclge_dev *hdev, for (i = 0; i < HNAE3_MAX_TC; i++) { ets->prio_tc[i] = hdev->tm_info.prio_tc[i]; - ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i]; + if (i < hdev->tm_info.num_tc) + ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i]; + else + ets->tc_tx_bw[i] = 0; if (hdev->tm_info.tc_info[i].tc_sch_mode == HCLGE_SCH_MODE_SP) @@ -123,7 +126,8 @@ static u8 hclge_ets_tc_changed(struct hclge_dev *hdev, struct ieee_ets *ets, } static int hclge_ets_sch_mode_validate(struct hclge_dev *hdev, - struct ieee_ets *ets, bool *changed) + struct ieee_ets *ets, bool *changed, + u8 tc_num) { bool has_ets_tc = false; u32 total_ets_bw = 0; @@ -137,6 +141,13 @@ static int hclge_ets_sch_mode_validate(struct hclge_dev *hdev, *changed = true; break; case IEEE_8021QAZ_TSA_ETS: + if (i >= tc_num) { + dev_err(&hdev->pdev->dev, + "tc%u is disabled, cannot set ets bw\n", + i); + return -EINVAL; + } + /* The hardware will switch to sp mode if bandwidth is * 0, so limit ets bandwidth must be greater than 0. */ @@ -176,7 +187,7 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets, if (ret) return ret; - ret = hclge_ets_sch_mode_validate(hdev, ets, changed); + ret = hclge_ets_sch_mode_validate(hdev, ets, changed, tc_num); if (ret) return ret; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c index 233c132dc513e..409db2e709651 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c @@ -693,8 +693,7 @@ static int hclge_dbg_dump_tc(struct hclge_dev *hdev, char *buf, int len) for (i = 0; i < HNAE3_MAX_TC; i++) { sch_mode_str = ets_weight->tc_weight[i] ? 
"dwrr" : "sp"; pos += scnprintf(buf + pos, len - pos, "%u %4s %3u\n", - i, sch_mode_str, - hdev->tm_info.pg_info[0].tc_dwrr[i]); + i, sch_mode_str, ets_weight->tc_weight[i]); } return 0; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c index 922c0da3660c7..150f146fa24fb 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c @@ -785,6 +785,7 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev) static void hclge_tm_pg_info_init(struct hclge_dev *hdev) { #define BW_PERCENT 100 +#define DEFAULT_BW_WEIGHT 1 u8 i; @@ -806,7 +807,7 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev) for (k = 0; k < hdev->tm_info.num_tc; k++) hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT; for (; k < HNAE3_MAX_TC; k++) - hdev->tm_info.pg_info[i].tc_dwrr[k] = 0; + hdev->tm_info.pg_info[i].tc_dwrr[k] = DEFAULT_BW_WEIGHT; } } diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c index 9954493cd4489..62497f5565c59 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c +++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c @@ -1839,7 +1839,7 @@ void i40e_dbg_pf_exit(struct i40e_pf *pf) void i40e_dbg_init(void) { i40e_dbg_root = debugfs_create_dir(i40e_driver_name, NULL); - if (!i40e_dbg_root) + if (IS_ERR(i40e_dbg_root)) pr_info("init of debugfs failed\n"); } diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index ba96312feb505..e48810e0627d2 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -3286,9 +3286,6 @@ static void iavf_adminq_task(struct work_struct *work) u32 val, oldval; u16 pending; - if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) - goto out; - if (!mutex_trylock(&adapter->crit_lock)) { if (adapter->state == __IAVF_REMOVE) return; @@ -3297,10 +3294,13 @@ static void iavf_adminq_task(struct work_struct *work) goto out; } + if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) + goto unlock; + event.buf_len = IAVF_MAX_AQ_BUF_SIZE; event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL); if (!event.msg_buf) - goto out; + goto unlock; do { ret = iavf_clean_arq_element(hw, &event, &pending); @@ -3315,7 +3315,6 @@ static void iavf_adminq_task(struct work_struct *work) if (pending != 0) memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE); } while (pending); - mutex_unlock(&adapter->crit_lock); if (iavf_is_reset_in_progress(adapter)) goto freedom; @@ -3359,6 +3358,8 @@ static void iavf_adminq_task(struct work_struct *work) freedom: kfree(event.msg_buf); +unlock: + mutex_unlock(&adapter->crit_lock); out: /* re-enable Admin queue interrupt cause */ iavf_misc_irq_enable(adapter); diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c index ead6d50fc0adc..8c6e13f87b7d3 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c @@ -1281,16 +1281,21 @@ ice_cfg_fdir_xtrct_seq(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp, ICE_FLOW_FLD_OFF_INVAL); } - /* add filter for outer headers */ fltr_idx = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT); + + assign_bit(fltr_idx, hw->fdir_perfect_fltr, perfect_filter); + + /* add filter for outer headers */ ret = ice_fdir_set_hw_fltr_rule(pf, seg, fltr_idx, ICE_FD_HW_SEG_NON_TUN); - if (ret == -EEXIST) - /* Rule already exists, free memory 
and continue */ - devm_kfree(dev, seg); - else if (ret) + if (ret == -EEXIST) { + /* Rule already exists, free memory and count as success */ + ret = 0; + goto err_exit; + } else if (ret) { /* could not write filter, free memory */ goto err_exit; + } /* make tunneled filter HW entries if possible */ memcpy(&tun_seg[1], seg, sizeof(*seg)); @@ -1305,18 +1310,13 @@ ice_cfg_fdir_xtrct_seq(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp, devm_kfree(dev, tun_seg); } - if (perfect_filter) - set_bit(fltr_idx, hw->fdir_perfect_fltr); - else - clear_bit(fltr_idx, hw->fdir_perfect_fltr); - return ret; err_exit: devm_kfree(dev, tun_seg); devm_kfree(dev, seg); - return -EOPNOTSUPP; + return ret; } /** @@ -1914,7 +1914,9 @@ int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd) input->comp_report = ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL; /* input struct is added to the HW filter list */ - ice_fdir_update_list_entry(pf, input, fsp->location); + ret = ice_fdir_update_list_entry(pf, input, fsp->location); + if (ret) + goto release_lock; ret = ice_fdir_write_all_fltr(pf, input, true); if (ret) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 496a4eb687b00..3ccf2fedc5af7 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -316,6 +316,33 @@ static void igc_clean_all_tx_rings(struct igc_adapter *adapter) igc_clean_tx_ring(adapter->tx_ring[i]); } +static void igc_disable_tx_ring_hw(struct igc_ring *ring) +{ + struct igc_hw *hw = &ring->q_vector->adapter->hw; + u8 idx = ring->reg_idx; + u32 txdctl; + + txdctl = rd32(IGC_TXDCTL(idx)); + txdctl &= ~IGC_TXDCTL_QUEUE_ENABLE; + txdctl |= IGC_TXDCTL_SWFLUSH; + wr32(IGC_TXDCTL(idx), txdctl); +} + +/** + * igc_disable_all_tx_rings_hw - Disable all transmit queue operation + * @adapter: board private structure + */ +static void igc_disable_all_tx_rings_hw(struct igc_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct igc_ring *tx_ring = adapter->tx_ring[i]; + + igc_disable_tx_ring_hw(tx_ring); + } +} + /** * igc_setup_tx_resources - allocate Tx resources (Descriptors) * @tx_ring: tx descriptor ring (for a specific queue) to setup @@ -5056,6 +5083,7 @@ void igc_down(struct igc_adapter *adapter) /* clear VLAN promisc flag so VFTA will be updated if necessary */ adapter->flags &= ~IGC_FLAG_VLAN_PROMISC; + igc_disable_all_tx_rings_hw(adapter); igc_clean_all_tx_rings(adapter); igc_clean_all_rx_rings(adapter); } @@ -7274,18 +7302,6 @@ void igc_enable_rx_ring(struct igc_ring *ring) igc_alloc_rx_buffers(ring, igc_desc_unused(ring)); } -static void igc_disable_tx_ring_hw(struct igc_ring *ring) -{ - struct igc_hw *hw = &ring->q_vector->adapter->hw; - u8 idx = ring->reg_idx; - u32 txdctl; - - txdctl = rd32(IGC_TXDCTL(idx)); - txdctl &= ~IGC_TXDCTL_QUEUE_ENABLE; - txdctl |= IGC_TXDCTL_SWFLUSH; - wr32(IGC_TXDCTL(idx), txdctl); -} - void igc_disable_tx_ring(struct igc_ring *ring) { igc_disable_tx_ring_hw(ring); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 1726297f2e0df..8eb9839a3ca69 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -8479,7 +8479,7 @@ static void ixgbe_atr(struct ixgbe_ring *ring, struct ixgbe_adapter *adapter = q_vector->adapter; if (unlikely(skb_tail_pointer(skb) < hdr.network + - VXLAN_HEADROOM)) + vxlan_headroom(0))) return; /* verify the port is recognized as VXLAN */ 
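
The ixgbe hunk above, like the vxlan_core.c hunks later in this section, replaces the fixed VXLAN_HEADROOM/VXLAN6_HEADROOM constants with a flags-aware vxlan_headroom() helper. The helper itself is added to include/net/vxlan.h, which is not part of this excerpt; a minimal sketch of the computation it performs, assuming the usual kernel definitions of struct iphdr, struct ipv6hdr, struct udphdr, struct vxlanhdr and ETH_HLEN are in scope, would be:

	/* Sketch of the flags-aware headroom calculation (not the verbatim
	 * upstream helper): outer IPv4 or IPv6 header plus UDP plus VXLAN,
	 * plus an inner Ethernet header -- except in GPE mode, where the
	 * encapsulated frame carries no inner MAC header.
	 */
	static inline unsigned short vxlan_headroom(u32 flags)
	{
		return (flags & VXLAN_F_IPV6 ? sizeof(struct ipv6hdr)
					     : sizeof(struct iphdr)) +
		       sizeof(struct udphdr) + sizeof(struct vxlanhdr) +
		       (flags & VXLAN_F_GPE ? 0 : ETH_HLEN);
	}

Calling it with 0, as ixgbe does above, yields 20 + 8 + 8 + 14 = 50 bytes, matching the old IPv4 non-GPE VXLAN_HEADROOM value, while the vxlan transmit and MTU paths further below pass the tunnel's configured flags so that GPE tunnels no longer reserve room for an inner Ethernet header.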
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
index 6fe67f3a7f6f1..7e20282c12d00 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
@@ -218,13 +218,54 @@ void npc_config_secret_key(struct rvu *rvu, int blkaddr)
 
 void npc_program_mkex_hash(struct rvu *rvu, int blkaddr)
 {
+	struct npc_mcam_kex_hash *mh = rvu->kpu.mkex_hash;
 	struct hw_cap *hwcap = &rvu->hw->cap;
+	u8 intf, ld, hdr_offset, byte_len;
 	struct rvu_hwinfo *hw = rvu->hw;
-	u8 intf;
+	u64 cfg;
 
+	/* Check if hardware supports hash extraction */
 	if (!hwcap->npc_hash_extract)
 		return;
 
+	/* Check if the IPv6 source/destination address
+	 * should be hash enabled.
+	 * Hashing reduces the 128bit SIP/DIP fields to 32bit
+	 * so that the 224 bit X2 key can be used for IPv6 based filters as
+	 * well, which in turn makes a greater number of MCAM entries
+	 * available for use.
+	 *
+	 * Hashing of IPv6 SIP/DIP is enabled in the following scenarios:
+	 * 1. If the silicon variant supports the hashing feature
+	 * 2. If the number of bytes of IP addr being extracted is 4 bytes,
+	 *    ie 32bit. The assumption here is that if the user wants the
+	 *    8 LSB bytes of the IP addr or the full 16 bytes, the intention
+	 *    is not to use the 32bit hash.
+	 */
+	for (intf = 0; intf < hw->npc_intfs; intf++) {
+		for (ld = 0; ld < NPC_MAX_LD; ld++) {
+			cfg = rvu_read64(rvu, blkaddr,
+					 NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf,
+								       NPC_LID_LC,
+								       NPC_LT_LC_IP6,
+								       ld));
+			hdr_offset = FIELD_GET(NPC_HDR_OFFSET, cfg);
+			byte_len = FIELD_GET(NPC_BYTESM, cfg);
+			/* Hashing of the IPv6 source/destination address
+			 * should be enabled if
+			 * hdr_offset == 8 (offset of the source IPv6 address) or
+			 * hdr_offset == 24 (offset of the destination IPv6
+			 * address) and the number of bytes to be
+			 * extracted is 4. As per the hardware configuration
+			 * byte_len should be == actual byte_len - 1.
+			 * Hence byte_len is checked against 3, not 4.
+			 */
+			if ((hdr_offset == 8 || hdr_offset == 24) && byte_len == 3)
+				mh->lid_lt_ld_hash_en[intf][NPC_LID_LC][NPC_LT_LC_IP6][ld] = true;
+		}
+	}
+
+	/* Update the hash configuration if the field is hash enabled */
 	for (intf = 0; intf < hw->npc_intfs; intf++) {
 		npc_program_mkex_hash_rx(rvu, blkaddr, intf);
 		npc_program_mkex_hash_tx(rvu, blkaddr, intf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
index a1c3d987b8044..57a09328d46b5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
@@ -70,8 +70,8 @@ static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = {
 	[NIX_INTF_RX] = {
 		[NPC_LID_LC] = {
 			[NPC_LT_LC_IP6] = {
-				true,
-				true,
+				false,
+				false,
 			},
 		},
 	},
@@ -79,8 +79,8 @@ static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = {
 	[NIX_INTF_TX] = {
 		[NPC_LID_LC] = {
 			[NPC_LT_LC_IP6] = {
-				true,
-				true,
+				false,
+				false,
 			},
 		},
 	},
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index b69122686407d..31fdecb414b6f 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -623,6 +623,7 @@ struct rtl8169_private {
 	int cfg9346_usage_count;
 
 	unsigned supports_gmii:1;
+	unsigned aspm_manageable:1;
 	dma_addr_t counters_phys_addr;
 	struct rtl8169_counters *counters;
 	struct rtl8169_tc_offsets tc_offset;
@@ -2746,7 +2747,8 @@ static void rtl_hw_aspm_clkreq_enable(struct rtl8169_private *tp, bool enable)
 	if (tp->mac_version < RTL_GIGA_MAC_VER_32)
 		return;
 
-	if (enable) {
+	/* Don't enable ASPM in the chip if the OS can't control ASPM */
+	if (enable && tp->aspm_manageable) {
 		/* On these chip versions ASPM can even harm
 		 * bus communication of other PCI devices.
 		 */
@@ -5156,6 +5158,16 @@ done:
 	rtl_rar_set(tp, mac_addr);
 }
 
+/* The register is set if the system vendor successfully tested ASPM 1.2 */
+static bool rtl_aspm_is_safe(struct rtl8169_private *tp)
+{
+	if (tp->mac_version >= RTL_GIGA_MAC_VER_61 &&
+	    r8168_mac_ocp_read(tp, 0xc0b2) & 0xf)
+		return true;
+
+	return false;
+}
+
 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct rtl8169_private *tp;
@@ -5227,6 +5239,19 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	tp->mac_version = chipset;
 
+	/* Disable ASPM L1 as that causes random device stop working
+	 * problems as well as full system hangs for some PCIe device users.
+	 * Chips from RTL8168h partially have issues with L1.2, but seem
+	 * to work fine with L1 and L1.1.
+ */ + if (rtl_aspm_is_safe(tp)) + rc = 0; + else if (tp->mac_version >= RTL_GIGA_MAC_VER_46) + rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); + else + rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1); + tp->aspm_manageable = !rc; + tp->dash_type = rtl_check_dash(tp); tp->cp_cmd = RTL_R16(tp, CPlusCmd) & CPCMD_MASK; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c index df41eac54058f..03ceb6a940732 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c @@ -240,13 +240,15 @@ void stmmac_dwmac4_set_mac_addr(void __iomem *ioaddr, const u8 addr[6], void stmmac_dwmac4_set_mac(void __iomem *ioaddr, bool enable) { u32 value = readl(ioaddr + GMAC_CONFIG); + u32 old_val = value; if (enable) value |= GMAC_CONFIG_RE | GMAC_CONFIG_TE; else value &= ~(GMAC_CONFIG_TE | GMAC_CONFIG_RE); - writel(value, ioaddr + GMAC_CONFIG); + if (value != old_val) + writel(value, ioaddr + GMAC_CONFIG); } void stmmac_dwmac4_get_mac_addr(void __iomem *ioaddr, unsigned char *addr, diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c index f0529c31d0b6e..7b637bb8b41c8 100644 --- a/drivers/net/ipa/ipa_table.c +++ b/drivers/net/ipa/ipa_table.c @@ -273,16 +273,15 @@ static int ipa_filter_reset(struct ipa *ipa, bool modem) if (ret) return ret; - ret = ipa_filter_reset_table(ipa, true, false, modem); - if (ret) + ret = ipa_filter_reset_table(ipa, false, true, modem); + if (ret || !ipa_table_hash_support(ipa)) return ret; - ret = ipa_filter_reset_table(ipa, false, true, modem); + ret = ipa_filter_reset_table(ipa, true, false, modem); if (ret) return ret; - ret = ipa_filter_reset_table(ipa, true, true, modem); - return ret; + return ipa_filter_reset_table(ipa, true, true, modem); } /* The AP routes and modem routes are each contiguous within the @@ -291,12 +290,13 @@ static int ipa_filter_reset(struct ipa *ipa, bool modem) * */ static int ipa_route_reset(struct ipa *ipa, bool modem) { + bool hash_support = ipa_table_hash_support(ipa); u32 modem_route_count = ipa->modem_route_count; struct gsi_trans *trans; u16 first; u16 count; - trans = ipa_cmd_trans_alloc(ipa, 4); + trans = ipa_cmd_trans_alloc(ipa, hash_support ? 
4 : 2); if (!trans) { dev_err(&ipa->pdev->dev, "no transaction for %s route reset\n", @@ -313,10 +313,12 @@ static int ipa_route_reset(struct ipa *ipa, bool modem) } ipa_table_reset_add(trans, false, false, false, first, count); - ipa_table_reset_add(trans, false, true, false, first, count); - ipa_table_reset_add(trans, false, false, true, first, count); - ipa_table_reset_add(trans, false, true, true, first, count); + + if (hash_support) { + ipa_table_reset_add(trans, false, true, false, first, count); + ipa_table_reset_add(trans, false, true, true, first, count); + } gsi_trans_commit_wait(trans); diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index 4a53debf9d7c4..ed908165a8b4e 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -1746,6 +1746,7 @@ static const struct nla_policy macvlan_policy[IFLA_MACVLAN_MAX + 1] = { [IFLA_MACVLAN_MACADDR_COUNT] = { .type = NLA_U32 }, [IFLA_MACVLAN_BC_QUEUE_LEN] = { .type = NLA_U32 }, [IFLA_MACVLAN_BC_QUEUE_LEN_USED] = { .type = NLA_REJECT }, + [IFLA_MACVLAN_BC_CUTOFF] = { .type = NLA_S32 }, }; int macvlan_link_register(struct rtnl_link_ops *ops) diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c index 55d9d7acc32eb..d4bb90d768811 100644 --- a/drivers/net/phy/marvell10g.c +++ b/drivers/net/phy/marvell10g.c @@ -328,6 +328,13 @@ static int mv3310_power_up(struct phy_device *phydev) ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL, MV_V2_PORT_CTRL_PWRDOWN); + /* Sometimes, the power down bit doesn't clear immediately, and + * a read of this register causes the bit not to clear. Delay + * 100us to allow the PHY to come out of power down mode before + * the next access. + */ + udelay(100); + if (phydev->drv->phy_id != MARVELL_PHY_ID_88X3310 || priv->firmware_ver < 0x00030000) return ret; diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index 555b0b1e9a789..d3dc22509ea58 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -2135,6 +2135,15 @@ static void team_setup_by_port(struct net_device *dev, dev->mtu = port_dev->mtu; memcpy(dev->broadcast, port_dev->broadcast, port_dev->addr_len); eth_hw_addr_inherit(dev, port_dev); + + if (port_dev->flags & IFF_POINTOPOINT) { + dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST); + dev->flags |= (IFF_POINTOPOINT | IFF_NOARP); + } else if ((port_dev->flags & (IFF_BROADCAST | IFF_MULTICAST)) == + (IFF_BROADCAST | IFF_MULTICAST)) { + dev->flags |= (IFF_BROADCAST | IFF_MULTICAST); + dev->flags &= ~(IFF_POINTOPOINT | IFF_NOARP); + } } static int team_dev_type_check_change(struct net_device *dev, diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 486b5849033dc..2336a0e4befa5 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -4110,6 +4110,8 @@ static int virtnet_probe(struct virtio_device *vdev) if (vi->has_rss || vi->has_rss_hash_report) virtnet_init_default_rss(vi); + _virtnet_set_queues(vi, vi->curr_queue_pairs); + /* serialize netdev register + virtio_device_ready() with ndo_open() */ rtnl_lock(); @@ -4148,8 +4150,6 @@ static int virtnet_probe(struct virtio_device *vdev) goto free_unregister_netdev; } - virtnet_set_queues(vi, vi->curr_queue_pairs); - /* Assume link up if device can't report link status, otherwise get link status from config. 
*/ netif_carrier_off(dev); diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c index 561fe1b314f5f..7532cac2154c5 100644 --- a/drivers/net/vxlan/vxlan_core.c +++ b/drivers/net/vxlan/vxlan_core.c @@ -623,6 +623,32 @@ static int vxlan_fdb_append(struct vxlan_fdb *f, return 1; } +static bool vxlan_parse_gpe_proto(struct vxlanhdr *hdr, __be16 *protocol) +{ + struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)hdr; + + /* Need to have Next Protocol set for interfaces in GPE mode. */ + if (!gpe->np_applied) + return false; + /* "The initial version is 0. If a receiver does not support the + * version indicated it MUST drop the packet. + */ + if (gpe->version != 0) + return false; + /* "When the O bit is set to 1, the packet is an OAM packet and OAM + * processing MUST occur." However, we don't implement OAM + * processing, thus drop the packet. + */ + if (gpe->oam_flag) + return false; + + *protocol = tun_p_to_eth_p(gpe->next_protocol); + if (!*protocol) + return false; + + return true; +} + static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb, unsigned int off, struct vxlanhdr *vh, size_t hdrlen, @@ -649,26 +675,24 @@ static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb, return vh; } -static struct sk_buff *vxlan_gro_receive(struct sock *sk, - struct list_head *head, - struct sk_buff *skb) +static struct vxlanhdr *vxlan_gro_prepare_receive(struct sock *sk, + struct list_head *head, + struct sk_buff *skb, + struct gro_remcsum *grc) { - struct sk_buff *pp = NULL; struct sk_buff *p; struct vxlanhdr *vh, *vh2; unsigned int hlen, off_vx; - int flush = 1; struct vxlan_sock *vs = rcu_dereference_sk_user_data(sk); __be32 flags; - struct gro_remcsum grc; - skb_gro_remcsum_init(&grc); + skb_gro_remcsum_init(grc); off_vx = skb_gro_offset(skb); hlen = off_vx + sizeof(*vh); vh = skb_gro_header(skb, hlen, off_vx); if (unlikely(!vh)) - goto out; + return NULL; skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr)); @@ -676,12 +700,12 @@ static struct sk_buff *vxlan_gro_receive(struct sock *sk, if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) { vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr), - vh->vx_vni, &grc, + vh->vx_vni, grc, !!(vs->flags & VXLAN_F_REMCSUM_NOPARTIAL)); if (!vh) - goto out; + return NULL; } skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */ @@ -698,12 +722,48 @@ static struct sk_buff *vxlan_gro_receive(struct sock *sk, } } - pp = call_gro_receive(eth_gro_receive, head, skb); - flush = 0; + return vh; +} -out: +static struct sk_buff *vxlan_gro_receive(struct sock *sk, + struct list_head *head, + struct sk_buff *skb) +{ + struct sk_buff *pp = NULL; + struct gro_remcsum grc; + int flush = 1; + + if (vxlan_gro_prepare_receive(sk, head, skb, &grc)) { + pp = call_gro_receive(eth_gro_receive, head, skb); + flush = 0; + } skb_gro_flush_final_remcsum(skb, pp, flush, &grc); + return pp; +} + +static struct sk_buff *vxlan_gpe_gro_receive(struct sock *sk, + struct list_head *head, + struct sk_buff *skb) +{ + const struct packet_offload *ptype; + struct sk_buff *pp = NULL; + struct gro_remcsum grc; + struct vxlanhdr *vh; + __be16 protocol; + int flush = 1; + vh = vxlan_gro_prepare_receive(sk, head, skb, &grc); + if (vh) { + if (!vxlan_parse_gpe_proto(vh, &protocol)) + goto out; + ptype = gro_find_receive_by_type(protocol); + if (!ptype) + goto out; + pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb); + flush = 0; + } +out: + skb_gro_flush_final_remcsum(skb, pp, flush, &grc); return pp; } @@ -715,6 
+775,21 @@ static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr)); } +static int vxlan_gpe_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) +{ + struct vxlanhdr *vh = (struct vxlanhdr *)(skb->data + nhoff); + const struct packet_offload *ptype; + int err = -ENOSYS; + __be16 protocol; + + if (!vxlan_parse_gpe_proto(vh, &protocol)) + return err; + ptype = gro_find_complete_by_type(protocol); + if (ptype) + err = ptype->callbacks.gro_complete(skb, nhoff + sizeof(struct vxlanhdr)); + return err; +} + static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, const u8 *mac, __u16 state, __be32 src_vni, __u16 ndm_flags) @@ -1525,35 +1600,6 @@ out: unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS; } -static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed, - __be16 *protocol, - struct sk_buff *skb, u32 vxflags) -{ - struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed; - - /* Need to have Next Protocol set for interfaces in GPE mode. */ - if (!gpe->np_applied) - return false; - /* "The initial version is 0. If a receiver does not support the - * version indicated it MUST drop the packet. - */ - if (gpe->version != 0) - return false; - /* "When the O bit is set to 1, the packet is an OAM packet and OAM - * processing MUST occur." However, we don't implement OAM - * processing, thus drop the packet. - */ - if (gpe->oam_flag) - return false; - - *protocol = tun_p_to_eth_p(gpe->next_protocol); - if (!*protocol) - return false; - - unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS; - return true; -} - static bool vxlan_set_mac(struct vxlan_dev *vxlan, struct vxlan_sock *vs, struct sk_buff *skb, __be32 vni) @@ -1655,8 +1701,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb) * used by VXLAN extensions if explicitly requested. */ if (vs->flags & VXLAN_F_GPE) { - if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags)) + if (!vxlan_parse_gpe_proto(&unparsed, &protocol)) goto drop; + unparsed.vx_flags &= ~VXLAN_GPE_USED_BITS; raw_proto = true; } @@ -2515,7 +2562,7 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, } ndst = &rt->dst; - err = skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM, + err = skb_tunnel_check_pmtu(skb, ndst, vxlan_headroom(flags & VXLAN_F_GPE), netif_is_any_bridge_port(dev)); if (err < 0) { goto tx_error; @@ -2576,7 +2623,8 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, goto out_unlock; } - err = skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM, + err = skb_tunnel_check_pmtu(skb, ndst, + vxlan_headroom((flags & VXLAN_F_GPE) | VXLAN_F_IPV6), netif_is_any_bridge_port(dev)); if (err < 0) { goto tx_error; @@ -2988,14 +3036,12 @@ static int vxlan_change_mtu(struct net_device *dev, int new_mtu) struct vxlan_rdst *dst = &vxlan->default_dst; struct net_device *lowerdev = __dev_get_by_index(vxlan->net, dst->remote_ifindex); - bool use_ipv6 = !!(vxlan->cfg.flags & VXLAN_F_IPV6); /* This check is different than dev->max_mtu, because it looks at * the lowerdev->mtu, rather than the static dev->max_mtu */ if (lowerdev) { - int max_mtu = lowerdev->mtu - - (use_ipv6 ? 
VXLAN6_HEADROOM : VXLAN_HEADROOM); + int max_mtu = lowerdev->mtu - vxlan_headroom(vxlan->cfg.flags); if (new_mtu > max_mtu) return -EINVAL; } @@ -3376,8 +3422,13 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6, tunnel_cfg.encap_rcv = vxlan_rcv; tunnel_cfg.encap_err_lookup = vxlan_err_lookup; tunnel_cfg.encap_destroy = NULL; - tunnel_cfg.gro_receive = vxlan_gro_receive; - tunnel_cfg.gro_complete = vxlan_gro_complete; + if (vs->flags & VXLAN_F_GPE) { + tunnel_cfg.gro_receive = vxlan_gpe_gro_receive; + tunnel_cfg.gro_complete = vxlan_gpe_gro_complete; + } else { + tunnel_cfg.gro_receive = vxlan_gro_receive; + tunnel_cfg.gro_complete = vxlan_gro_complete; + } setup_udp_tunnel_sock(net, sock, &tunnel_cfg); @@ -3641,11 +3692,11 @@ static void vxlan_config_apply(struct net_device *dev, struct vxlan_dev *vxlan = netdev_priv(dev); struct vxlan_rdst *dst = &vxlan->default_dst; unsigned short needed_headroom = ETH_HLEN; - bool use_ipv6 = !!(conf->flags & VXLAN_F_IPV6); int max_mtu = ETH_MAX_MTU; + u32 flags = conf->flags; if (!changelink) { - if (conf->flags & VXLAN_F_GPE) + if (flags & VXLAN_F_GPE) vxlan_raw_setup(dev); else vxlan_ether_setup(dev); @@ -3670,8 +3721,7 @@ static void vxlan_config_apply(struct net_device *dev, dev->needed_tailroom = lowerdev->needed_tailroom; - max_mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM : - VXLAN_HEADROOM); + max_mtu = lowerdev->mtu - vxlan_headroom(flags); if (max_mtu < ETH_MIN_MTU) max_mtu = ETH_MIN_MTU; @@ -3682,10 +3732,9 @@ static void vxlan_config_apply(struct net_device *dev, if (dev->mtu > max_mtu) dev->mtu = max_mtu; - if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA) - needed_headroom += VXLAN6_HEADROOM; - else - needed_headroom += VXLAN_HEADROOM; + if (flags & VXLAN_F_COLLECT_METADATA) + flags |= VXLAN_F_IPV6; + needed_headroom += vxlan_headroom(flags); dev->needed_headroom = needed_headroom; memcpy(&vxlan->cfg, conf, sizeof(*conf)); diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c index 827d91e73efab..0af0e965fb57e 100644 --- a/drivers/pci/controller/pcie-rockchip-ep.c +++ b/drivers/pci/controller/pcie-rockchip-ep.c @@ -61,65 +61,32 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip, ROCKCHIP_PCIE_AT_OB_REGION_DESC0(region)); rockchip_pcie_write(rockchip, 0, ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region)); - rockchip_pcie_write(rockchip, 0, - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(region)); - rockchip_pcie_write(rockchip, 0, - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(region)); } static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn, - u32 r, u32 type, u64 cpu_addr, - u64 pci_addr, size_t size) + u32 r, u64 cpu_addr, u64 pci_addr, + size_t size) { - u64 sz = 1ULL << fls64(size - 1); - int num_pass_bits = ilog2(sz); - u32 addr0, addr1, desc0, desc1; - bool is_nor_msg = (type == AXI_WRAPPER_NOR_MSG); + int num_pass_bits = fls64(size - 1); + u32 addr0, addr1, desc0; - /* The minimal region size is 1MB */ if (num_pass_bits < 8) num_pass_bits = 8; - cpu_addr -= rockchip->mem_res->start; - addr0 = ((is_nor_msg ? 0x10 : (num_pass_bits - 1)) & - PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | - (lower_32_bits(cpu_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); - addr1 = upper_32_bits(is_nor_msg ? 
cpu_addr : pci_addr); - desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | type; - desc1 = 0; - - if (is_nor_msg) { - rockchip_pcie_write(rockchip, 0, - ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r)); - rockchip_pcie_write(rockchip, 0, - ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r)); - rockchip_pcie_write(rockchip, desc0, - ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r)); - rockchip_pcie_write(rockchip, desc1, - ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r)); - } else { - /* PCI bus address region */ - rockchip_pcie_write(rockchip, addr0, - ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r)); - rockchip_pcie_write(rockchip, addr1, - ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r)); - rockchip_pcie_write(rockchip, desc0, - ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r)); - rockchip_pcie_write(rockchip, desc1, - ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r)); - - addr0 = - ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | - (lower_32_bits(cpu_addr) & - PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); - addr1 = upper_32_bits(cpu_addr); - } + addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | + (lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); + addr1 = upper_32_bits(pci_addr); + desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | AXI_WRAPPER_MEM_WRITE; - /* CPU bus address region */ + /* PCI bus address region */ rockchip_pcie_write(rockchip, addr0, - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r)); + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r)); rockchip_pcie_write(rockchip, addr1, - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r)); + ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r)); + rockchip_pcie_write(rockchip, desc0, + ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r)); + rockchip_pcie_write(rockchip, 0, + ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r)); } static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn, @@ -258,26 +225,20 @@ static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar)); } +static inline u32 rockchip_ob_region(phys_addr_t addr) +{ + return (addr >> ilog2(SZ_1M)) & 0x1f; +} + static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, phys_addr_t addr, u64 pci_addr, size_t size) { struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); struct rockchip_pcie *pcie = &ep->rockchip; - u32 r; - - r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG); - /* - * Region 0 is reserved for configuration space and shouldn't - * be used elsewhere per TRM, so leave it out. - */ - if (r >= ep->max_regions - 1) { - dev_err(&epc->dev, "no free outbound region\n"); - return -EINVAL; - } + u32 r = rockchip_ob_region(addr); - rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, AXI_WRAPPER_MEM_WRITE, addr, - pci_addr, size); + rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, addr, pci_addr, size); set_bit(r, &ep->ob_region_map); ep->ob_addr[r] = addr; @@ -292,15 +253,11 @@ static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, struct rockchip_pcie *rockchip = &ep->rockchip; u32 r; - for (r = 0; r < ep->max_regions - 1; r++) + for (r = 0; r < ep->max_regions; r++) if (ep->ob_addr[r] == addr) break; - /* - * Region 0 is reserved for configuration space and shouldn't - * be used elsewhere per TRM, so leave it out. 
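
The replacement scheme visible in these hunks drops the dynamic region allocator entirely: each outbound ATU region now owns a fixed 1 MB window laid out back to back from the start of the controller's memory resource, so the region index is derived directly from the physical address. A worked illustration of that mapping follows (the helper name ob_region_for() and the base address 0xfa000000 are made up for the example; the real helper is rockchip_ob_region() below):

	#include <linux/log2.h>		/* ilog2() */
	#include <linux/sizes.h>	/* SZ_1M */
	#include <linux/types.h>

	/* With 1 MB windows packed back to back, the region index is
	 * just "which megabyte is this address in?", modulo 32 regions.
	 *
	 * Example, assuming a hypothetical window base of 0xfa000000:
	 *   addr = 0xfa000000 + 3 * SZ_1M + 0x1234
	 *   (addr >> ilog2(SZ_1M)) & 0x1f  ==  (0xfa3 & 0x1f)  ==  3
	 * while the base itself (0xfa0 & 0x1f) maps to region 0.
	 */
	static inline u32 ob_region_for(phys_addr_t addr)
	{
		return (addr >> ilog2(SZ_1M)) & 0x1f;
	}

Because pci_epc_mem_alloc_addr() hands out addresses from those same 1 MB-aligned windows, an address can never straddle two regions, which is what lets map_addr/unmap_addr below stop searching for a free region.
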
- */ - if (r == ep->max_regions - 1) + if (r == ep->max_regions) return; rockchip_pcie_clear_ep_ob_atu(rockchip, r); @@ -397,7 +354,8 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn, struct rockchip_pcie *rockchip = &ep->rockchip; u32 flags, mme, data, data_mask; u8 msi_count; - u64 pci_addr, pci_addr_mask = 0xff; + u64 pci_addr; + u32 r; /* Check MSI enable bit */ flags = rockchip_pcie_read(&ep->rockchip, @@ -431,21 +389,20 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn, ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + ROCKCHIP_PCIE_EP_MSI_CTRL_REG + PCI_MSI_ADDRESS_LO); - pci_addr &= GENMASK_ULL(63, 2); /* Set the outbound region if needed. */ - if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || + if (unlikely(ep->irq_pci_addr != (pci_addr & PCIE_ADDR_MASK) || ep->irq_pci_fn != fn)) { - rockchip_pcie_prog_ep_ob_atu(rockchip, fn, ep->max_regions - 1, - AXI_WRAPPER_MEM_WRITE, + r = rockchip_ob_region(ep->irq_phys_addr); + rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r, ep->irq_phys_addr, - pci_addr & ~pci_addr_mask, - pci_addr_mask + 1); - ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); + pci_addr & PCIE_ADDR_MASK, + ~PCIE_ADDR_MASK + 1); + ep->irq_pci_addr = (pci_addr & PCIE_ADDR_MASK); ep->irq_pci_fn = fn; } - writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask)); + writew(data, ep->irq_cpu_addr + (pci_addr & ~PCIE_ADDR_MASK)); return 0; } @@ -527,6 +484,8 @@ static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip, if (err < 0 || ep->max_regions > MAX_REGION_LIMIT) ep->max_regions = MAX_REGION_LIMIT; + ep->ob_region_map = 0; + err = of_property_read_u8(dev->of_node, "max-functions", &ep->epc->max_functions); if (err < 0) @@ -547,7 +506,9 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev) struct rockchip_pcie *rockchip; struct pci_epc *epc; size_t max_regions; - int err; + struct pci_epc_mem_window *windows = NULL; + int err, i; + u32 cfg_msi, cfg_msix_cp; ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); if (!ep) @@ -594,15 +555,27 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev) /* Only enable function 0 by default */ rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG); - err = pci_epc_mem_init(epc, rockchip->mem_res->start, - resource_size(rockchip->mem_res), PAGE_SIZE); + windows = devm_kcalloc(dev, ep->max_regions, + sizeof(struct pci_epc_mem_window), GFP_KERNEL); + if (!windows) { + err = -ENOMEM; + goto err_uninit_port; + } + for (i = 0; i < ep->max_regions; i++) { + windows[i].phys_base = rockchip->mem_res->start + (SZ_1M * i); + windows[i].size = SZ_1M; + windows[i].page_size = SZ_1M; + } + err = pci_epc_multi_mem_init(epc, windows, ep->max_regions); + devm_kfree(dev, windows); + if (err < 0) { dev_err(dev, "failed to initialize the memory space\n"); goto err_uninit_port; } ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr, - SZ_128K); + SZ_1M); if (!ep->irq_cpu_addr) { dev_err(dev, "failed to reserve memory space for MSI\n"); err = -ENOMEM; @@ -611,6 +584,29 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev) ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR; + /* + * MSI-X is not supported but the controller still advertises the MSI-X + * capability by default, which can lead to the Root Complex side + * allocating MSI-X vectors which cannot be used. 
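
The trick being set up here relies on how configuration-space capabilities are discovered: software never scans for capability IDs, it only follows each entry's next pointer, so an entry that nothing points at is effectively invisible. A simplified sketch of that walk, modeled on the kernel's pci_find_capability() (the function name find_cap() is invented for illustration; this is not part of the patch):

	#include <linux/pci.h>

	/* Walk the standard capability list: each entry starts with an
	 * ID byte followed by a "next capability" pointer byte. An
	 * entry the chain skips over is never reached, hence never
	 * advertised to software.
	 */
	static u8 find_cap(struct pci_dev *dev, u8 want)
	{
		int ttl = 48;			/* guard against looped lists */
		u16 ent;
		u8 pos;

		pci_read_config_byte(dev, PCI_CAPABILITY_LIST, &pos);
		while (pos >= 0x40 && ttl--) {
			pos &= ~3;
			pci_read_config_word(dev, pos, &ent);
			if ((ent & 0xff) == want)
				return pos;
			pos = ent >> 8;		/* follow the next pointer */
		}
		return 0;
	}

Splicing the MSI-X entry's next pointer into the MSI entry, as the code below does, sends a walk like this straight past the MSI-X registers, so the Root Complex never learns they exist.
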
Avoid this by skipping + * the MSI-X capability entry in the PCIe capabilities linked-list: get + * the next pointer from the MSI-X entry and set that in the MSI + * capability entry (which is the previous entry). This way the MSI-X + * entry is skipped (left out of the linked-list) and not advertised. + */ + cfg_msi = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE + + ROCKCHIP_PCIE_EP_MSI_CTRL_REG); + + cfg_msi &= ~ROCKCHIP_PCIE_EP_MSI_CP1_MASK; + + cfg_msix_cp = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE + + ROCKCHIP_PCIE_EP_MSIX_CAP_REG) & + ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK; + + cfg_msi |= cfg_msix_cp; + + rockchip_pcie_write(rockchip, cfg_msi, + PCIE_EP_CONFIG_BASE + ROCKCHIP_PCIE_EP_MSI_CTRL_REG); + rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE, PCIE_CLIENT_CONFIG); diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h index 8e92dc3339ecc..fe0333778fd93 100644 --- a/drivers/pci/controller/pcie-rockchip.h +++ b/drivers/pci/controller/pcie-rockchip.h @@ -139,6 +139,7 @@ #define PCIE_RC_RP_ATS_BASE 0x400000 #define PCIE_RC_CONFIG_NORMAL_BASE 0x800000 +#define PCIE_EP_PF_CONFIG_REGS_BASE 0x800000 #define PCIE_RC_CONFIG_BASE 0xa00000 #define PCIE_EP_CONFIG_BASE 0xa00000 #define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00) @@ -157,10 +158,11 @@ #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274) #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20) +#define PCIE_ADDR_MASK 0xffffff00 #define PCIE_CORE_AXI_CONF_BASE 0xc00000 #define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0) #define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f -#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR 0xffffff00 +#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR PCIE_ADDR_MASK #define PCIE_CORE_OB_REGION_ADDR1 (PCIE_CORE_AXI_CONF_BASE + 0x4) #define PCIE_CORE_OB_REGION_DESC0 (PCIE_CORE_AXI_CONF_BASE + 0x8) #define PCIE_CORE_OB_REGION_DESC1 (PCIE_CORE_AXI_CONF_BASE + 0xc) @@ -168,7 +170,7 @@ #define PCIE_CORE_AXI_INBOUND_BASE 0xc00800 #define PCIE_RP_IB_ADDR0 (PCIE_CORE_AXI_INBOUND_BASE + 0x0) #define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS 0x3f -#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR 0xffffff00 +#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR PCIE_ADDR_MASK #define PCIE_RP_IB_ADDR1 (PCIE_CORE_AXI_INBOUND_BASE + 0x4) /* Size of one AXI Region (not Region 0) */ @@ -225,6 +227,8 @@ #define ROCKCHIP_PCIE_EP_CMD_STATUS 0x4 #define ROCKCHIP_PCIE_EP_CMD_STATUS_IS BIT(19) #define ROCKCHIP_PCIE_EP_MSI_CTRL_REG 0x90 +#define ROCKCHIP_PCIE_EP_MSI_CP1_OFFSET 8 +#define ROCKCHIP_PCIE_EP_MSI_CP1_MASK GENMASK(15, 8) #define ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET 16 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET 17 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK GENMASK(19, 17) @@ -232,14 +236,19 @@ #define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK GENMASK(22, 20) #define ROCKCHIP_PCIE_EP_MSI_CTRL_ME BIT(16) #define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP BIT(24) +#define ROCKCHIP_PCIE_EP_MSIX_CAP_REG 0xb0 +#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_OFFSET 8 +#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK GENMASK(15, 8) #define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1 -#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12)) +#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3 +#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) \ + (PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12))) +#define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \ + (PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12))) #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \ - (PCIE_RC_RP_ATS_BASE + 0x0840 + 
(fn) * 0x0040 + (bar) * 0x0008) + (PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008) #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \ - (PCIE_RC_RP_ATS_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008) -#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \ - (PCIE_RC_RP_ATS_BASE + 0x0000 + ((r) & 0x1f) * 0x0020) + (PCIE_CORE_AXI_CONF_BASE + 0x082c + (fn) * 0x0040 + (bar) * 0x0008) #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12) #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \ (((devfn) << 12) & \ @@ -247,20 +256,21 @@ #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20) #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \ (((bus) << 20) & ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK) +#define PCIE_RC_EP_ATR_OB_REGIONS_1_32 (PCIE_CORE_AXI_CONF_BASE + 0x0020) +#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \ + (PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0000 + ((r) & 0x1f) * 0x0020) #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r) \ - (PCIE_RC_RP_ATS_BASE + 0x0004 + ((r) & 0x1f) * 0x0020) + (PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0004 + ((r) & 0x1f) * 0x0020) #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23) #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24) #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \ (((devfn) << 24) & ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK) #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r) \ - (PCIE_RC_RP_ATS_BASE + 0x0008 + ((r) & 0x1f) * 0x0020) -#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \ - (PCIE_RC_RP_ATS_BASE + 0x000c + ((r) & 0x1f) * 0x0020) -#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r) \ - (PCIE_RC_RP_ATS_BASE + 0x0018 + ((r) & 0x1f) * 0x0020) -#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r) \ - (PCIE_RC_RP_ATS_BASE + 0x001c + ((r) & 0x1f) * 0x0020) + (PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0008 + ((r) & 0x1f) * 0x0020) +#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \ + (PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x000c + ((r) & 0x1f) * 0x0020) +#define ROCKCHIP_PCIE_AT_OB_REGION_DESC2(r) \ + (PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0010 + ((r) & 0x1f) * 0x0020) #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn) \ (PCIE_CORE_CTRL_MGMT_BASE + 0x0240 + (fn) * 0x0008) diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index db32335039d61..998e26de2ad76 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -193,12 +193,39 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist) link->clkpm_disable = blacklist ? 1 : 0; } -static bool pcie_retrain_link(struct pcie_link_state *link) +static int pcie_wait_for_retrain(struct pci_dev *pdev) { - struct pci_dev *parent = link->pdev; unsigned long end_jiffies; u16 reg16; + /* Wait for Link Training to be cleared by hardware */ + end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT; + do { + pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, ®16); + if (!(reg16 & PCI_EXP_LNKSTA_LT)) + return 0; + msleep(1); + } while (time_before(jiffies, end_jiffies)); + + return -ETIMEDOUT; +} + +static int pcie_retrain_link(struct pcie_link_state *link) +{ + struct pci_dev *parent = link->pdev; + int rc; + u16 reg16; + + /* + * Ensure the updated LNKCTL parameters are used during link + * training by checking that there is no ongoing link training to + * avoid LTSSM race as recommended in Implementation Note at the + * end of PCIe r6.0.1 sec 7.5.3.7. 
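
Stripped of the surrounding ASPM plumbing, the sequence being built in these hunks is: wait for any in-flight training to finish, update LNKCTL, set the Retrain Link bit, then wait again. A condensed sketch of that ordering, using the pcie_wait_for_retrain() helper introduced above (pcie_capability_set_word() stands in for the open-coded read-modify-write that follows below; the wrapper name retrain_link_quiesced() is invented for illustration):

	#include <linux/pci.h>

	/* Hypothetical condensation of the race-free retrain sequence. */
	static int retrain_link_quiesced(struct pci_dev *pdev)
	{
		int rc;

		/* An in-flight training could sample LNKCTL before the
		 * update lands, so quiesce the LTSSM first.
		 */
		rc = pcie_wait_for_retrain(pdev);
		if (rc)
			return rc;

		/* Kick off training with the parameters now in LNKCTL. */
		pcie_capability_set_word(pdev, PCI_EXP_LNKCTL,
					 PCI_EXP_LNKCTL_RL);

		/* Report whether that training completed in time. */
		return pcie_wait_for_retrain(pdev);
	}
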
+ */ + rc = pcie_wait_for_retrain(parent); + if (rc) + return rc; + pcie_capability_read_word(parent, PCI_EXP_LNKCTL, ®16); reg16 |= PCI_EXP_LNKCTL_RL; pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); @@ -212,15 +239,7 @@ static bool pcie_retrain_link(struct pcie_link_state *link) pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); } - /* Wait for link training end. Break out after waiting for timeout */ - end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT; - do { - pcie_capability_read_word(parent, PCI_EXP_LNKSTA, ®16); - if (!(reg16 & PCI_EXP_LNKSTA_LT)) - break; - msleep(1); - } while (time_before(jiffies, end_jiffies)); - return !(reg16 & PCI_EXP_LNKSTA_LT); + return pcie_wait_for_retrain(parent); } /* @@ -289,15 +308,15 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) reg16 &= ~PCI_EXP_LNKCTL_CCC; pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); - if (pcie_retrain_link(link)) - return; + if (pcie_retrain_link(link)) { - /* Training failed. Restore common clock configurations */ - pci_err(parent, "ASPM: Could not configure common clock\n"); - list_for_each_entry(child, &linkbus->devices, bus_list) - pcie_capability_write_word(child, PCI_EXP_LNKCTL, + /* Training failed. Restore common clock configurations */ + pci_err(parent, "ASPM: Could not configure common clock\n"); + list_for_each_entry(child, &linkbus->devices, bus_list) + pcie_capability_write_word(child, PCI_EXP_LNKCTL, child_reg[PCI_FUNC(child->devfn)]); - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); + pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); + } } /* Convert L0s latency encoding to ns */ diff --git a/drivers/phy/hisilicon/phy-hisi-inno-usb2.c b/drivers/phy/hisilicon/phy-hisi-inno-usb2.c index b133ae06757ab..a922fb11a1092 100644 --- a/drivers/phy/hisilicon/phy-hisi-inno-usb2.c +++ b/drivers/phy/hisilicon/phy-hisi-inno-usb2.c @@ -158,7 +158,7 @@ static int hisi_inno_phy_probe(struct platform_device *pdev) phy_set_drvdata(phy, &priv->ports[i]); i++; - if (i > INNO_PHY_PORT_NUM) { + if (i >= INNO_PHY_PORT_NUM) { dev_warn(dev, "Support %d ports in maximum\n", i); of_node_put(child); break; diff --git a/drivers/phy/mediatek/phy-mtk-dp.c b/drivers/phy/mediatek/phy-mtk-dp.c index 232fd3f1ff1b1..d7024a1443358 100644 --- a/drivers/phy/mediatek/phy-mtk-dp.c +++ b/drivers/phy/mediatek/phy-mtk-dp.c @@ -169,7 +169,7 @@ static int mtk_dp_phy_probe(struct platform_device *pdev) regs = *(struct regmap **)dev->platform_data; if (!regs) - return dev_err_probe(dev, EINVAL, + return dev_err_probe(dev, -EINVAL, "No data passed, requires struct regmap**\n"); dp_phy = devm_kzalloc(dev, sizeof(*dp_phy), GFP_KERNEL); diff --git a/drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c b/drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c index 8aa7251de4a96..bbfe11d6a69d7 100644 --- a/drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c +++ b/drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c @@ -253,7 +253,7 @@ static int mtk_hdmi_pll_calc(struct mtk_hdmi_phy *hdmi_phy, struct clk_hw *hw, for (i = 0; i < ARRAY_SIZE(txpredivs); i++) { ns_hdmipll_ck = 5 * tmds_clk * txposdiv * txpredivs[i]; if (ns_hdmipll_ck >= 5 * GIGA && - ns_hdmipll_ck <= 1 * GIGA) + ns_hdmipll_ck <= 12 * GIGA) break; } if (i == (ARRAY_SIZE(txpredivs) - 1) && diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c index 6c237f3cc66db..6170f8fd118e2 100644 --- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c +++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c @@ -110,11 +110,13 @@ 
struct phy_override_seq { /** * struct qcom_snps_hsphy - snps hs phy attributes * + * @dev: device structure + * * @phy: generic phy * @base: iomapped memory space for snps hs phy * - * @cfg_ahb_clk: AHB2PHY interface clock - * @ref_clk: phy reference clock + * @num_clks: number of clocks + * @clks: array of clocks * @phy_reset: phy reset control * @vregs: regulator supplies bulk data * @phy_initialized: if PHY has been initialized correctly @@ -122,11 +124,13 @@ struct phy_override_seq { * @update_seq_cfg: tuning parameters for phy init */ struct qcom_snps_hsphy { + struct device *dev; + struct phy *phy; void __iomem *base; - struct clk *cfg_ahb_clk; - struct clk *ref_clk; + int num_clks; + struct clk_bulk_data *clks; struct reset_control *phy_reset; struct regulator_bulk_data vregs[SNPS_HS_NUM_VREGS]; @@ -135,6 +139,34 @@ struct qcom_snps_hsphy { struct phy_override_seq update_seq_cfg[NUM_HSPHY_TUNING_PARAMS]; }; +static int qcom_snps_hsphy_clk_init(struct qcom_snps_hsphy *hsphy) +{ + struct device *dev = hsphy->dev; + + hsphy->num_clks = 2; + hsphy->clks = devm_kcalloc(dev, hsphy->num_clks, sizeof(*hsphy->clks), GFP_KERNEL); + if (!hsphy->clks) + return -ENOMEM; + + /* + * TODO: Currently no device tree instantiation of the PHY is using the clock. + * This needs to be fixed in order for this code to be able to use devm_clk_bulk_get(). + */ + hsphy->clks[0].id = "cfg_ahb"; + hsphy->clks[0].clk = devm_clk_get_optional(dev, "cfg_ahb"); + if (IS_ERR(hsphy->clks[0].clk)) + return dev_err_probe(dev, PTR_ERR(hsphy->clks[0].clk), + "failed to get cfg_ahb clk\n"); + + hsphy->clks[1].id = "ref"; + hsphy->clks[1].clk = devm_clk_get(dev, "ref"); + if (IS_ERR(hsphy->clks[1].clk)) + return dev_err_probe(dev, PTR_ERR(hsphy->clks[1].clk), + "failed to get ref clk\n"); + + return 0; +} + static inline void qcom_snps_hsphy_write_mask(void __iomem *base, u32 offset, u32 mask, u32 val) { @@ -165,22 +197,13 @@ static int qcom_snps_hsphy_suspend(struct qcom_snps_hsphy *hsphy) 0, USB2_AUTO_RESUME); } - clk_disable_unprepare(hsphy->cfg_ahb_clk); return 0; } static int qcom_snps_hsphy_resume(struct qcom_snps_hsphy *hsphy) { - int ret; - dev_dbg(&hsphy->phy->dev, "Resume QCOM SNPS PHY, mode\n"); - ret = clk_prepare_enable(hsphy->cfg_ahb_clk); - if (ret) { - dev_err(&hsphy->phy->dev, "failed to enable cfg ahb clock\n"); - return ret; - } - return 0; } @@ -374,16 +397,16 @@ static int qcom_snps_hsphy_init(struct phy *phy) if (ret) return ret; - ret = clk_prepare_enable(hsphy->cfg_ahb_clk); + ret = clk_bulk_prepare_enable(hsphy->num_clks, hsphy->clks); if (ret) { - dev_err(&phy->dev, "failed to enable cfg ahb clock, %d\n", ret); + dev_err(&phy->dev, "failed to enable clocks, %d\n", ret); goto poweroff_phy; } ret = reset_control_assert(hsphy->phy_reset); if (ret) { dev_err(&phy->dev, "failed to assert phy_reset, %d\n", ret); - goto disable_ahb_clk; + goto disable_clks; } usleep_range(100, 150); @@ -391,7 +414,7 @@ static int qcom_snps_hsphy_init(struct phy *phy) ret = reset_control_deassert(hsphy->phy_reset); if (ret) { dev_err(&phy->dev, "failed to de-assert phy_reset, %d\n", ret); - goto disable_ahb_clk; + goto disable_clks; } qcom_snps_hsphy_write_mask(hsphy->base, USB2_PHY_USB_PHY_CFG0, @@ -448,8 +471,8 @@ static int qcom_snps_hsphy_init(struct phy *phy) return 0; -disable_ahb_clk: - clk_disable_unprepare(hsphy->cfg_ahb_clk); +disable_clks: + clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks); poweroff_phy: regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs); @@ -461,7 +484,7 @@ static int 
qcom_snps_hsphy_exit(struct phy *phy) struct qcom_snps_hsphy *hsphy = phy_get_drvdata(phy); reset_control_assert(hsphy->phy_reset); - clk_disable_unprepare(hsphy->cfg_ahb_clk); + clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks); regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs); hsphy->phy_initialized = false; @@ -554,14 +577,15 @@ static int qcom_snps_hsphy_probe(struct platform_device *pdev) if (!hsphy) return -ENOMEM; + hsphy->dev = dev; + hsphy->base = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(hsphy->base)) return PTR_ERR(hsphy->base); - hsphy->ref_clk = devm_clk_get(dev, "ref"); - if (IS_ERR(hsphy->ref_clk)) - return dev_err_probe(dev, PTR_ERR(hsphy->ref_clk), - "failed to get ref clk\n"); + ret = qcom_snps_hsphy_clk_init(hsphy); + if (ret) + return dev_err_probe(dev, ret, "failed to initialize clocks\n"); hsphy->phy_reset = devm_reset_control_get_exclusive(&pdev->dev, NULL); if (IS_ERR(hsphy->phy_reset)) { diff --git a/drivers/platform/x86/amd/pmf/acpi.c b/drivers/platform/x86/amd/pmf/acpi.c index 081e84e116e79..3fc5e4547d9f2 100644 --- a/drivers/platform/x86/amd/pmf/acpi.c +++ b/drivers/platform/x86/amd/pmf/acpi.c @@ -106,6 +106,27 @@ int apmf_get_static_slider_granular(struct amd_pmf_dev *pdev, data, sizeof(*data)); } +int apmf_os_power_slider_update(struct amd_pmf_dev *pdev, u8 event) +{ + struct os_power_slider args; + struct acpi_buffer params; + union acpi_object *info; + int err = 0; + + args.size = sizeof(args); + args.slider_event = event; + + params.length = sizeof(args); + params.pointer = (void *)&args; + + info = apmf_if_call(pdev, APMF_FUNC_OS_POWER_SLIDER_UPDATE, ¶ms); + if (!info) + err = -EIO; + + kfree(info); + return err; +} + static void apmf_sbios_heartbeat_notify(struct work_struct *work) { struct amd_pmf_dev *dev = container_of(work, struct amd_pmf_dev, heart_beat.work); @@ -289,7 +310,7 @@ int apmf_acpi_init(struct amd_pmf_dev *pmf_dev) ret = apmf_get_system_params(pmf_dev); if (ret) { - dev_err(pmf_dev->dev, "APMF apmf_get_system_params failed :%d\n", ret); + dev_dbg(pmf_dev->dev, "APMF apmf_get_system_params failed :%d\n", ret); goto out; } diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c index 7780705917b76..a022325161273 100644 --- a/drivers/platform/x86/amd/pmf/core.c +++ b/drivers/platform/x86/amd/pmf/core.c @@ -71,7 +71,11 @@ static int amd_pmf_pwr_src_notify_call(struct notifier_block *nb, unsigned long return NOTIFY_DONE; } - amd_pmf_set_sps_power_limits(pmf); + if (is_apmf_func_supported(pmf, APMF_FUNC_STATIC_SLIDER_GRANULAR)) + amd_pmf_set_sps_power_limits(pmf); + + if (is_apmf_func_supported(pmf, APMF_FUNC_OS_POWER_SLIDER_UPDATE)) + amd_pmf_power_slider_update_event(pmf); return NOTIFY_OK; } @@ -295,7 +299,8 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev) int ret; /* Enable Static Slider */ - if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { + if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR) || + is_apmf_func_supported(dev, APMF_FUNC_OS_POWER_SLIDER_UPDATE)) { amd_pmf_init_sps(dev); dev->pwr_src_notifier.notifier_call = amd_pmf_pwr_src_notify_call; power_supply_reg_notifier(&dev->pwr_src_notifier); diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h index 06c30cdc05733..deba88e6e4c8d 100644 --- a/drivers/platform/x86/amd/pmf/pmf.h +++ b/drivers/platform/x86/amd/pmf/pmf.h @@ -21,6 +21,7 @@ #define APMF_FUNC_SBIOS_HEARTBEAT 4 #define APMF_FUNC_AUTO_MODE 5 #define APMF_FUNC_SET_FAN_IDX 7 +#define 
APMF_FUNC_OS_POWER_SLIDER_UPDATE 8 #define APMF_FUNC_STATIC_SLIDER_GRANULAR 9 #define APMF_FUNC_DYN_SLIDER_AC 11 #define APMF_FUNC_DYN_SLIDER_DC 12 @@ -44,6 +45,14 @@ #define GET_STT_LIMIT_APU 0x20 #define GET_STT_LIMIT_HS2 0x21 +/* OS slider update notification */ +#define DC_BEST_PERF 0 +#define DC_BETTER_PERF 1 +#define DC_BATTERY_SAVER 3 +#define AC_BEST_PERF 4 +#define AC_BETTER_PERF 5 +#define AC_BETTER_BATTERY 6 + /* Fan Index for Auto Mode */ #define FAN_INDEX_AUTO 0xFFFFFFFF @@ -193,6 +202,11 @@ struct amd_pmf_static_slider_granular { struct apmf_sps_prop_granular prop[POWER_SOURCE_MAX][POWER_MODE_MAX]; }; +struct os_power_slider { + u16 size; + u8 slider_event; +} __packed; + struct fan_table_control { bool manual; unsigned long fan_id; @@ -383,6 +397,7 @@ int amd_pmf_send_cmd(struct amd_pmf_dev *dev, u8 message, bool get, u32 arg, u32 int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev); int amd_pmf_get_power_source(void); int apmf_install_handler(struct amd_pmf_dev *pmf_dev); +int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag); /* SPS Layer */ int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf); @@ -393,6 +408,7 @@ void amd_pmf_deinit_sps(struct amd_pmf_dev *dev); int apmf_get_static_slider_granular(struct amd_pmf_dev *pdev, struct apmf_static_slider_granular_output *output); bool is_pprof_balanced(struct amd_pmf_dev *pmf); +int amd_pmf_power_slider_update_event(struct amd_pmf_dev *dev); int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx); diff --git a/drivers/platform/x86/amd/pmf/sps.c b/drivers/platform/x86/amd/pmf/sps.c index bed762d47a14a..fd448844de206 100644 --- a/drivers/platform/x86/amd/pmf/sps.c +++ b/drivers/platform/x86/amd/pmf/sps.c @@ -119,14 +119,77 @@ int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf) return mode; } +int amd_pmf_power_slider_update_event(struct amd_pmf_dev *dev) +{ + u8 mode, flag = 0; + int src; + + mode = amd_pmf_get_pprof_modes(dev); + if (mode < 0) + return mode; + + src = amd_pmf_get_power_source(); + + if (src == POWER_SOURCE_AC) { + switch (mode) { + case POWER_MODE_PERFORMANCE: + flag |= BIT(AC_BEST_PERF); + break; + case POWER_MODE_BALANCED_POWER: + flag |= BIT(AC_BETTER_PERF); + break; + case POWER_MODE_POWER_SAVER: + flag |= BIT(AC_BETTER_BATTERY); + break; + default: + dev_err(dev->dev, "unsupported platform profile\n"); + return -EOPNOTSUPP; + } + + } else if (src == POWER_SOURCE_DC) { + switch (mode) { + case POWER_MODE_PERFORMANCE: + flag |= BIT(DC_BEST_PERF); + break; + case POWER_MODE_BALANCED_POWER: + flag |= BIT(DC_BETTER_PERF); + break; + case POWER_MODE_POWER_SAVER: + flag |= BIT(DC_BATTERY_SAVER); + break; + default: + dev_err(dev->dev, "unsupported platform profile\n"); + return -EOPNOTSUPP; + } + } + + apmf_os_power_slider_update(dev, flag); + + return 0; +} + static int amd_pmf_profile_set(struct platform_profile_handler *pprof, enum platform_profile_option profile) { struct amd_pmf_dev *pmf = container_of(pprof, struct amd_pmf_dev, pprof); + int ret = 0; pmf->current_profile = profile; - return amd_pmf_set_sps_power_limits(pmf); + /* Notify EC about the slider position change */ + if (is_apmf_func_supported(pmf, APMF_FUNC_OS_POWER_SLIDER_UPDATE)) { + ret = amd_pmf_power_slider_update_event(pmf); + if (ret) + return ret; + } + + if (is_apmf_func_supported(pmf, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { + ret = amd_pmf_set_sps_power_limits(pmf); + if (ret) + return ret; + } + + return 0; } int amd_pmf_init_sps(struct amd_pmf_dev *dev) @@ -134,10 +197,13 @@ int amd_pmf_init_sps(struct 
amd_pmf_dev *dev) int err; dev->current_profile = PLATFORM_PROFILE_BALANCED; - amd_pmf_load_defaults_sps(dev); - /* update SPS balanced power mode thermals */ - amd_pmf_set_sps_power_limits(dev); + if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { + amd_pmf_load_defaults_sps(dev); + + /* update SPS balanced power mode thermals */ + amd_pmf_set_sps_power_limits(dev); + } dev->pprof.profile_get = amd_pmf_profile_get; dev->pprof.profile_set = amd_pmf_profile_set; diff --git a/drivers/platform/x86/msi-laptop.c b/drivers/platform/x86/msi-laptop.c index 6b18ec543ac3a..f4c6c36e05a52 100644 --- a/drivers/platform/x86/msi-laptop.c +++ b/drivers/platform/x86/msi-laptop.c @@ -208,7 +208,7 @@ static ssize_t set_device_state(const char *buf, size_t count, u8 mask) return -EINVAL; if (quirks->ec_read_only) - return -EOPNOTSUPP; + return 0; /* read current device state */ result = ec_read(MSI_STANDARD_EC_COMMAND_ADDRESS, &rdata); @@ -838,15 +838,15 @@ static bool msi_laptop_i8042_filter(unsigned char data, unsigned char str, static void msi_init_rfkill(struct work_struct *ignored) { if (rfk_wlan) { - rfkill_set_sw_state(rfk_wlan, !wlan_s); + msi_rfkill_set_state(rfk_wlan, !wlan_s); rfkill_wlan_set(NULL, !wlan_s); } if (rfk_bluetooth) { - rfkill_set_sw_state(rfk_bluetooth, !bluetooth_s); + msi_rfkill_set_state(rfk_bluetooth, !bluetooth_s); rfkill_bluetooth_set(NULL, !bluetooth_s); } if (rfk_threeg) { - rfkill_set_sw_state(rfk_threeg, !threeg_s); + msi_rfkill_set_state(rfk_threeg, !threeg_s); rfkill_threeg_set(NULL, !threeg_s); } } diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c index 9fd36c4687064..f0f210627cadf 100644 --- a/drivers/s390/block/dasd_3990_erp.c +++ b/drivers/s390/block/dasd_3990_erp.c @@ -1050,7 +1050,7 @@ dasd_3990_erp_com_rej(struct dasd_ccw_req * erp, char *sense) dev_err(&device->cdev->dev, "An I/O request was rejected" " because writing is inhibited\n"); erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED); - } else if (sense[7] & SNS7_INVALID_ON_SEC) { + } else if (sense[7] == SNS7_INVALID_ON_SEC) { dev_err(&device->cdev->dev, "An I/O request was rejected on a copy pair secondary device\n"); /* suppress dump of sense data for this error */ set_bit(DASD_CQR_SUPPRESS_CR, &erp->refers->flags); diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c index 8fca725b3daec..87890b6efcdcf 100644 --- a/drivers/s390/block/dasd_ioctl.c +++ b/drivers/s390/block/dasd_ioctl.c @@ -131,6 +131,7 @@ static int dasd_ioctl_resume(struct dasd_block *block) spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags); dasd_schedule_block_bh(block); + dasd_schedule_device_bh(base); return 0; } diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c index 9fb7f91ca1827..21c638e38c51f 100644 --- a/drivers/soundwire/amd_manager.c +++ b/drivers/soundwire/amd_manager.c @@ -910,9 +910,9 @@ static int amd_sdw_manager_probe(struct platform_device *pdev) return -ENOMEM; amd_manager->acp_mmio = devm_ioremap(dev, res->start, resource_size(res)); - if (IS_ERR(amd_manager->mmio)) { + if (!amd_manager->acp_mmio) { dev_err(dev, "mmio not found\n"); - return PTR_ERR(amd_manager->mmio); + return -ENOMEM; } amd_manager->instance = pdata->instance; amd_manager->mmio = amd_manager->acp_mmio + diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c index 1ea6a64f8c4a5..66e5dba919faa 100644 --- a/drivers/soundwire/bus.c +++ b/drivers/soundwire/bus.c @@ -908,8 +908,8 @@ static void sdw_modify_slave_status(struct sdw_slave 
*slave, "initializing enumeration and init completion for Slave %d\n", slave->dev_num); - init_completion(&slave->enumeration_complete); - init_completion(&slave->initialization_complete); + reinit_completion(&slave->enumeration_complete); + reinit_completion(&slave->initialization_complete); } else if ((status == SDW_SLAVE_ATTACHED) && (slave->status == SDW_SLAVE_UNATTACHED)) { @@ -917,7 +917,7 @@ static void sdw_modify_slave_status(struct sdw_slave *slave, "signaling enumeration completion for Slave %d\n", slave->dev_num); - complete(&slave->enumeration_complete); + complete_all(&slave->enumeration_complete); } slave->status = status; mutex_unlock(&bus->bus_lock); @@ -1941,7 +1941,7 @@ int sdw_handle_slave_status(struct sdw_bus *bus, "signaling initialization completion for Slave %d\n", slave->dev_num); - complete(&slave->initialization_complete); + complete_all(&slave->initialization_complete); /* * If the manager became pm_runtime active, the peripherals will be diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c index e3ef5ebae6b7c..027979c66486c 100644 --- a/drivers/soundwire/qcom.c +++ b/drivers/soundwire/qcom.c @@ -437,7 +437,7 @@ static int qcom_swrm_get_alert_slave_dev_num(struct qcom_swrm_ctrl *ctrl) status = (val >> (dev_num * SWRM_MCP_SLV_STATUS_SZ)); if ((status & SWRM_MCP_SLV_STATUS_MASK) == SDW_SLAVE_ALERT) { - ctrl->status[dev_num] = status; + ctrl->status[dev_num] = status & SWRM_MCP_SLV_STATUS_MASK; return dev_num; } } diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c index e03c87f0bfe7a..0fb97a79ad0b3 100644 --- a/drivers/staging/ks7010/ks_wlan_net.c +++ b/drivers/staging/ks7010/ks_wlan_net.c @@ -1583,8 +1583,10 @@ static int ks_wlan_set_encode_ext(struct net_device *dev, commit |= SME_WEP_FLAG; } if (enc->key_len) { - memcpy(&key->key_val[0], &enc->key[0], enc->key_len); - key->key_len = enc->key_len; + int key_len = clamp_val(enc->key_len, 0, IW_ENCODING_TOKEN_MAX); + + memcpy(&key->key_val[0], &enc->key[0], key_len); + key->key_len = key_len; commit |= (SME_WEP_VAL1 << index); } break; diff --git a/drivers/staging/media/atomisp/Kconfig b/drivers/staging/media/atomisp/Kconfig index c9bff98e5309a..e9b168ba97bf1 100644 --- a/drivers/staging/media/atomisp/Kconfig +++ b/drivers/staging/media/atomisp/Kconfig @@ -13,6 +13,7 @@ config VIDEO_ATOMISP tristate "Intel Atom Image Signal Processor Driver" depends on VIDEO_DEV && INTEL_ATOMISP depends on PMIC_OPREGION + select V4L2_FWNODE select IOSF_MBI select VIDEOBUF2_VMALLOC select VIDEO_V4L2_SUBDEV_API diff --git a/drivers/staging/rtl8712/rtl871x_xmit.c b/drivers/staging/rtl8712/rtl871x_xmit.c index 090345bad2230..6353dbe554d3a 100644 --- a/drivers/staging/rtl8712/rtl871x_xmit.c +++ b/drivers/staging/rtl8712/rtl871x_xmit.c @@ -21,6 +21,7 @@ #include "osdep_intf.h" #include "usb_ops.h" +#include #include static const u8 P802_1H_OUI[P80211_OUI_LEN] = {0x00, 0x00, 0xf8}; @@ -55,6 +56,7 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv, sint i; struct xmit_buf *pxmitbuf; struct xmit_frame *pxframe; + int j; memset((unsigned char *)pxmitpriv, 0, sizeof(struct xmit_priv)); spin_lock_init(&pxmitpriv->lock); @@ -117,11 +119,8 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv, _init_queue(&pxmitpriv->pending_xmitbuf_queue); pxmitpriv->pallocated_xmitbuf = kmalloc(NR_XMITBUFF * sizeof(struct xmit_buf) + 4, GFP_ATOMIC); - if (!pxmitpriv->pallocated_xmitbuf) { - kfree(pxmitpriv->pallocated_frame_buf); - pxmitpriv->pallocated_frame_buf = NULL; - return -ENOMEM; - } + if 
(!pxmitpriv->pallocated_xmitbuf) + goto clean_up_frame_buf; pxmitpriv->pxmitbuf = pxmitpriv->pallocated_xmitbuf + 4 - ((addr_t)(pxmitpriv->pallocated_xmitbuf) & 3); pxmitbuf = (struct xmit_buf *)pxmitpriv->pxmitbuf; @@ -129,13 +128,17 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv, INIT_LIST_HEAD(&pxmitbuf->list); pxmitbuf->pallocated_buf = kmalloc(MAX_XMITBUF_SZ + XMITBUF_ALIGN_SZ, GFP_ATOMIC); - if (!pxmitbuf->pallocated_buf) - return -ENOMEM; + if (!pxmitbuf->pallocated_buf) { + j = 0; + goto clean_up_alloc_buf; + } pxmitbuf->pbuf = pxmitbuf->pallocated_buf + XMITBUF_ALIGN_SZ - ((addr_t) (pxmitbuf->pallocated_buf) & (XMITBUF_ALIGN_SZ - 1)); - if (r8712_xmit_resource_alloc(padapter, pxmitbuf)) - return -ENOMEM; + if (r8712_xmit_resource_alloc(padapter, pxmitbuf)) { + j = 1; + goto clean_up_alloc_buf; + } list_add_tail(&pxmitbuf->list, &(pxmitpriv->free_xmitbuf_queue.queue)); pxmitbuf++; @@ -146,6 +149,28 @@ int _r8712_init_xmit_priv(struct xmit_priv *pxmitpriv, init_hwxmits(pxmitpriv->hwxmits, pxmitpriv->hwxmit_entry); tasklet_setup(&pxmitpriv->xmit_tasklet, r8712_xmit_bh); return 0; + +clean_up_alloc_buf: + if (j) { + /* failure happened in r8712_xmit_resource_alloc() + * delete extra pxmitbuf->pallocated_buf + */ + kfree(pxmitbuf->pallocated_buf); + } + for (j = 0; j < i; j++) { + int k; + + pxmitbuf--; /* reset pointer */ + kfree(pxmitbuf->pallocated_buf); + for (k = 0; k < 8; k++) /* delete xmit urb's */ + usb_free_urb(pxmitbuf->pxmit_urb[k]); + } + kfree(pxmitpriv->pallocated_xmitbuf); + pxmitpriv->pallocated_xmitbuf = NULL; +clean_up_frame_buf: + kfree(pxmitpriv->pallocated_frame_buf); + pxmitpriv->pallocated_frame_buf = NULL; + return -ENOMEM; } void _free_xmit_priv(struct xmit_priv *pxmitpriv) diff --git a/drivers/staging/rtl8712/xmit_linux.c b/drivers/staging/rtl8712/xmit_linux.c index 132afbf49dde9..ceb6b590b310f 100644 --- a/drivers/staging/rtl8712/xmit_linux.c +++ b/drivers/staging/rtl8712/xmit_linux.c @@ -112,6 +112,12 @@ int r8712_xmit_resource_alloc(struct _adapter *padapter, for (i = 0; i < 8; i++) { pxmitbuf->pxmit_urb[i] = usb_alloc_urb(0, GFP_KERNEL); if (!pxmitbuf->pxmit_urb[i]) { + int k; + + for (k = i - 1; k >= 0; k--) { + /* handle allocation errors part way through loop */ + usb_free_urb(pxmitbuf->pxmit_urb[k]); + } netdev_err(padapter->pnetdev, "pxmitbuf->pxmit_urb[i] == NULL\n"); return -ENOMEM; } diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c index 6fb14e5211971..bc07ae1c284cf 100644 --- a/drivers/thermal/thermal_of.c +++ b/drivers/thermal/thermal_of.c @@ -238,17 +238,13 @@ static int thermal_of_monitor_init(struct device_node *np, int *delay, int *pdel return 0; } -static struct thermal_zone_params *thermal_of_parameters_init(struct device_node *np) +static void thermal_of_parameters_init(struct device_node *np, + struct thermal_zone_params *tzp) { - struct thermal_zone_params *tzp; int coef[2]; int ncoef = ARRAY_SIZE(coef); int prop, ret; - tzp = kzalloc(sizeof(*tzp), GFP_KERNEL); - if (!tzp) - return ERR_PTR(-ENOMEM); - tzp->no_hwmon = true; if (!of_property_read_u32(np, "sustainable-power", &prop)) @@ -267,8 +263,6 @@ static struct thermal_zone_params *thermal_of_parameters_init(struct device_node tzp->slope = coef[0]; tzp->offset = coef[1]; - - return tzp; } static struct device_node *thermal_of_zone_get_by_name(struct thermal_zone_device *tz) @@ -442,13 +436,11 @@ static int thermal_of_unbind(struct thermal_zone_device *tz, static void thermal_of_zone_unregister(struct thermal_zone_device *tz) { struct thermal_trip 
*trips = tz->trips; - struct thermal_zone_params *tzp = tz->tzp; struct thermal_zone_device_ops *ops = tz->ops; thermal_zone_device_disable(tz); thermal_zone_device_unregister(tz); kfree(trips); - kfree(tzp); kfree(ops); } @@ -477,7 +469,7 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node * { struct thermal_zone_device *tz; struct thermal_trip *trips; - struct thermal_zone_params *tzp; + struct thermal_zone_params tzp = {}; struct thermal_zone_device_ops *of_ops; struct device_node *np; int delay, pdelay; @@ -509,12 +501,7 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node * goto out_kfree_trips; } - tzp = thermal_of_parameters_init(np); - if (IS_ERR(tzp)) { - ret = PTR_ERR(tzp); - pr_err("Failed to initialize parameter from %pOFn: %d\n", np, ret); - goto out_kfree_trips; - } + thermal_of_parameters_init(np, &tzp); of_ops->bind = thermal_of_bind; of_ops->unbind = thermal_of_unbind; @@ -522,12 +509,12 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node * mask = GENMASK_ULL((ntrips) - 1, 0); tz = thermal_zone_device_register_with_trips(np->name, trips, ntrips, - mask, data, of_ops, tzp, + mask, data, of_ops, &tzp, pdelay, delay); if (IS_ERR(tz)) { ret = PTR_ERR(tz); pr_err("Failed to register thermal zone %pOFn: %d\n", np, ret); - goto out_kfree_tzp; + goto out_kfree_trips; } ret = thermal_zone_device_enable(tz); @@ -540,8 +527,6 @@ static struct thermal_zone_device *thermal_of_zone_register(struct device_node * return tz; -out_kfree_tzp: - kfree(tzp); out_kfree_trips: kfree(trips); out_kfree_of_ops: diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c index b411a26cc092c..1cdefac4dd1b5 100644 --- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -3070,8 +3070,10 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc) gsm->has_devices = false; } for (i = NUM_DLCI - 1; i >= 0; i--) - if (gsm->dlci[i]) + if (gsm->dlci[i]) { gsm_dlci_release(gsm->dlci[i]); + gsm->dlci[i] = NULL; + } mutex_unlock(&gsm->mutex); /* Now wipe the queues */ tty_ldisc_flush(gsm->tty); diff --git a/drivers/tty/serial/8250/8250_dwlib.c b/drivers/tty/serial/8250/8250_dwlib.c index 75f32f054ebb1..84843e204a5e8 100644 --- a/drivers/tty/serial/8250/8250_dwlib.c +++ b/drivers/tty/serial/8250/8250_dwlib.c @@ -244,7 +244,7 @@ void dw8250_setup_port(struct uart_port *p) struct dw8250_port_data *pd = p->private_data; struct dw8250_data *data = to_dw8250_data(pd); struct uart_8250_port *up = up_to_u8250p(p); - u32 reg; + u32 reg, old_dlf; pd->hw_rs485_support = dw8250_detect_rs485_hw(p); if (pd->hw_rs485_support) { @@ -270,9 +270,11 @@ void dw8250_setup_port(struct uart_port *p) dev_dbg(p->dev, "Designware UART version %c.%c%c\n", (reg >> 24) & 0xff, (reg >> 16) & 0xff, (reg >> 8) & 0xff); + /* Preserve value written by firmware or bootloader */ + old_dlf = dw8250_readl_ext(p, DW_UART_DLF); dw8250_writel_ext(p, DW_UART_DLF, ~0U); reg = dw8250_readl_ext(p, DW_UART_DLF); - dw8250_writel_ext(p, DW_UART_DLF, 0); + dw8250_writel_ext(p, DW_UART_DLF, old_dlf); if (reg) { pd->dlf_size = fls(reg); diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c index 8582479f0211a..22fe5a8ce9399 100644 --- a/drivers/tty/serial/qcom_geni_serial.c +++ b/drivers/tty/serial/qcom_geni_serial.c @@ -1676,13 +1676,6 @@ static int qcom_geni_serial_probe(struct platform_device *pdev) if (ret) return ret; - /* - * Set pm_runtime status as ACTIVE so that wakeup_irq gets - * enabled/disabled from dev_pm_arm_wake_irq during 
system - * suspend/resume respectively. - */ - pm_runtime_set_active(&pdev->dev); - if (port->wakeup_irq > 0) { device_init_wakeup(&pdev->dev, true); ret = dev_pm_set_dedicated_wake_irq(&pdev->dev, diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index 7c9457962a3df..8b7a42e05d6d5 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -590,7 +590,7 @@ static void sci_start_tx(struct uart_port *port) dma_submit_error(s->cookie_tx)) { if (s->cfg->regtype == SCIx_RZ_SCIFA_REGTYPE) /* Switch irq from SCIF to DMA */ - disable_irq(s->irqs[SCIx_TXI_IRQ]); + disable_irq_nosync(s->irqs[SCIx_TXI_IRQ]); s->cookie_tx = 0; schedule_work(&s->work_tx); diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c index 1f565a216e748..a19db49327e29 100644 --- a/drivers/tty/serial/sifive.c +++ b/drivers/tty/serial/sifive.c @@ -811,7 +811,7 @@ static void sifive_serial_console_write(struct console *co, const char *s, local_irq_restore(flags); } -static int __init sifive_serial_console_setup(struct console *co, char *options) +static int sifive_serial_console_setup(struct console *co, char *options) { struct sifive_serial_port *ssp; int baud = SIFIVE_DEFAULT_BAUD_RATE; diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index c84be40fb8dfa..429beccdfeb1a 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -2276,7 +2276,7 @@ static int tiocsti(struct tty_struct *tty, char __user *p) char ch, mbz = 0; struct tty_ldisc *ld; - if (!tty_legacy_tiocsti) + if (!tty_legacy_tiocsti && !capable(CAP_SYS_ADMIN)) return -EIO; if ((current->signal->tty != tty) && !capable(CAP_SYS_ADMIN)) diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c index 1dcadef933e3a..69a44bd7e5d02 100644 --- a/drivers/usb/cdns3/cdns3-gadget.c +++ b/drivers/usb/cdns3/cdns3-gadget.c @@ -3012,12 +3012,14 @@ static int cdns3_gadget_udc_stop(struct usb_gadget *gadget) static int cdns3_gadget_check_config(struct usb_gadget *gadget) { struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget); + struct cdns3_endpoint *priv_ep; struct usb_ep *ep; int n_in = 0; int total; list_for_each_entry(ep, &gadget->ep_list, ep_list) { - if (ep->claimed && (ep->address & USB_DIR_IN)) + priv_ep = ep_to_cdns3_ep(ep); + if ((priv_ep->flags & EP_CLAIMED) && (ep->address & USB_DIR_IN)) n_in++; } diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c index 934b3d997702e..15e9bd180a1d2 100644 --- a/drivers/usb/core/quirks.c +++ b/drivers/usb/core/quirks.c @@ -436,6 +436,10 @@ static const struct usb_device_id usb_quirk_list[] = { /* novation SoundControl XL */ { USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME }, + /* Focusrite Scarlett Solo USB */ + { USB_DEVICE(0x1235, 0x8211), .driver_info = + USB_QUIRK_DISCONNECT_SUSPEND }, + /* Huawei 4G LTE module */ { USB_DEVICE(0x12d1, 0x15bb), .driver_info = USB_QUIRK_DISCONNECT_SUSPEND }, diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index d68958e151a78..99963e724b716 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -277,9 +277,9 @@ int dwc3_core_soft_reset(struct dwc3 *dwc) /* * We're resetting only the device side because, if we're in host mode, * XHCI driver will reset the host block. If dwc3 was configured for - * host-only mode, then we can return early. + * host-only mode or current role is host, then we can return early. 
*/ - if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST) + if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST) return 0; reg = dwc3_readl(dwc->regs, DWC3_DCTL); @@ -1209,22 +1209,6 @@ static int dwc3_core_init(struct dwc3 *dwc) dwc3_writel(dwc->regs, DWC3_GUCTL1, reg); } - if (dwc->dr_mode == USB_DR_MODE_HOST || - dwc->dr_mode == USB_DR_MODE_OTG) { - reg = dwc3_readl(dwc->regs, DWC3_GUCTL); - - /* - * Enable Auto retry Feature to make the controller operating in - * Host mode on seeing transaction errors(CRC errors or internal - * overrun scenerios) on IN transfers to reply to the device - * with a non-terminating retry ACK (i.e, an ACK transcation - * packet with Retry=1 & Nump != 0) - */ - reg |= DWC3_GUCTL_HSTINAUTORETRY; - - dwc3_writel(dwc->regs, DWC3_GUCTL, reg); - } - /* * Must config both number of packets and max burst settings to enable * RX and/or TX threshold. diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 1f043c31a0969..59a31cac30823 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -254,9 +254,6 @@ #define DWC3_GCTL_GBLHIBERNATIONEN BIT(1) #define DWC3_GCTL_DSBLCLKGTNG BIT(0) -/* Global User Control Register */ -#define DWC3_GUCTL_HSTINAUTORETRY BIT(14) - /* Global User Control 1 Register */ #define DWC3_GUCTL1_DEV_DECOUPLE_L1L2_EVT BIT(31) #define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28) diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c index 44a04c9b20735..6604845c397cd 100644 --- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -233,10 +233,12 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc, /* * A lot of BYT devices lack ACPI resource entries for - * the GPIOs, add a fallback mapping to the reference + * the GPIOs. If the ACPI entry for the GPIO controller + * is present add a fallback mapping to the reference * design GPIOs which all boards seem to use. */ - gpiod_add_lookup_table(&platform_bytcr_gpios); + if (acpi_dev_present("INT33FC", NULL, -1)) + gpiod_add_lookup_table(&platform_bytcr_gpios); /* * These GPIOs will turn on the USB2 PHY. Note that we have to diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c index 1b3489149e5ea..dd9b90481b4c2 100644 --- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -1125,6 +1125,10 @@ int usb_add_config(struct usb_composite_dev *cdev, goto done; status = bind(config); + + if (status == 0) + status = usb_gadget_check_config(cdev->gadget); + if (status < 0) { while (!list_empty(&config->functions)) { struct usb_function *f; diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c index 2acece16b8900..e549022642e56 100644 --- a/drivers/usb/gadget/legacy/raw_gadget.c +++ b/drivers/usb/gadget/legacy/raw_gadget.c @@ -310,13 +310,15 @@ static int gadget_bind(struct usb_gadget *gadget, dev->eps_num = i; spin_unlock_irqrestore(&dev->lock, flags); - /* Matches kref_put() in gadget_unbind(). */ - kref_get(&dev->count); - ret = raw_queue_event(dev, USB_RAW_EVENT_CONNECT, 0, NULL); - if (ret < 0) + if (ret < 0) { dev_err(&gadget->dev, "failed to queue event\n"); + set_gadget_data(gadget, NULL); + return ret; + } + /* Matches kref_put() in gadget_unbind(). 
*/ + kref_get(&dev->count); return ret; } diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c index 83fd1de14784f..0068d0c448658 100644 --- a/drivers/usb/gadget/udc/core.c +++ b/drivers/usb/gadget/udc/core.c @@ -878,7 +878,6 @@ int usb_gadget_activate(struct usb_gadget *gadget) */ if (gadget->connected) ret = usb_gadget_connect_locked(gadget); - mutex_unlock(&gadget->udc->connect_lock); unlock: mutex_unlock(&gadget->udc->connect_lock); diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c index 34e9c1df54c79..a0c11f51873e5 100644 --- a/drivers/usb/gadget/udc/tegra-xudc.c +++ b/drivers/usb/gadget/udc/tegra-xudc.c @@ -3718,15 +3718,15 @@ static int tegra_xudc_powerdomain_init(struct tegra_xudc *xudc) int err; xudc->genpd_dev_device = dev_pm_domain_attach_by_name(dev, "dev"); - if (IS_ERR_OR_NULL(xudc->genpd_dev_device)) { - err = PTR_ERR(xudc->genpd_dev_device) ? : -ENODATA; + if (IS_ERR(xudc->genpd_dev_device)) { + err = PTR_ERR(xudc->genpd_dev_device); dev_err(dev, "failed to get device power domain: %d\n", err); return err; } xudc->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "ss"); - if (IS_ERR_OR_NULL(xudc->genpd_dev_ss)) { - err = PTR_ERR(xudc->genpd_dev_ss) ? : -ENODATA; + if (IS_ERR(xudc->genpd_dev_ss)) { + err = PTR_ERR(xudc->genpd_dev_ss); dev_err(dev, "failed to get SuperSpeed power domain: %d\n", err); return err; } diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c index 533537ef3c21d..360680769494b 100644 --- a/drivers/usb/host/ohci-at91.c +++ b/drivers/usb/host/ohci-at91.c @@ -673,7 +673,13 @@ ohci_hcd_at91_drv_resume(struct device *dev) else at91_start_clock(ohci_at91); - ohci_resume(hcd, false); + /* + * According to the comment in ohci_hcd_at91_drv_suspend() + * we need to do a reset if the 48 MHz clock was stopped, + * that is, if ohci_at91->wakeup is clear. Tell ohci_resume() + * to reset in this case by setting its "hibernated" flag. 
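
For context, the suspend path this comment refers back to keeps the 48 MHz clock running only when the port is armed as a wakeup source. The sketch below is paraphrased and abridged from ohci_hcd_at91_drv_suspend() in the same driver (a recollection of the surrounding code for illustration, not part of this patch; the wrapper name at91_suspend_sketch() is invented):

	/* Abridged shape of the suspend side (paraphrased). */
	static int at91_suspend_sketch(struct usb_hcd *hcd,
				       struct ohci_at91_priv *ohci_at91)
	{
		int ret;

		if (ohci_at91->wakeup)
			enable_irq_wake(hcd->irq);

		ret = ohci_suspend(hcd, ohci_at91->wakeup);
		if (ret) {
			if (ohci_at91->wakeup)
				disable_irq_wake(hcd->irq);
			return ret;
		}

		/* No wakeup: the 48 MHz clock stops and the controller
		 * loses state -- exactly the case the resume path above
		 * must treat as a restore from hibernation.
		 */
		if (!ohci_at91->wakeup)
			at91_stop_clock(ohci_at91);

		return 0;
	}
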
+ */ + ohci_resume(hcd, !ohci_at91->wakeup); return 0; } diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c index 90cf40d6d0c31..b60521e1a9a63 100644 --- a/drivers/usb/host/xhci-mtk.c +++ b/drivers/usb/host/xhci-mtk.c @@ -592,6 +592,7 @@ static int xhci_mtk_probe(struct platform_device *pdev) } device_init_wakeup(dev, true); + dma_set_max_seg_size(dev, UINT_MAX); xhci = hcd_to_xhci(hcd); xhci->main_hcd = hcd; diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index a410162e15df1..db9826c38b20b 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -486,10 +486,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) pdev->device == 0x3432) xhci->quirks |= XHCI_BROKEN_STREAMS; - if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) { + if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) xhci->quirks |= XHCI_LPM_SUPPORT; - xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS; - } if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) { diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 2bc82b3a2f984..ad70f63593093 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -592,11 +592,8 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci, struct xhci_ring *ep_ring; struct xhci_command *cmd; struct xhci_segment *new_seg; - struct xhci_segment *halted_seg = NULL; union xhci_trb *new_deq; int new_cycle; - union xhci_trb *halted_trb; - int index = 0; dma_addr_t addr; u64 hw_dequeue; bool cycle_found = false; @@ -634,27 +631,7 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci, hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id); new_seg = ep_ring->deq_seg; new_deq = ep_ring->dequeue; - - /* - * Quirk: xHC write-back of the DCS field in the hardware dequeue - * pointer is wrong - use the cycle state of the TRB pointed to by - * the dequeue pointer. - */ - if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS && - !(ep->ep_state & EP_HAS_STREAMS)) - halted_seg = trb_in_td(xhci, td->start_seg, - td->first_trb, td->last_trb, - hw_dequeue & ~0xf, false); - if (halted_seg) { - index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) / - sizeof(*halted_trb); - halted_trb = &halted_seg->trbs[index]; - new_cycle = halted_trb->generic.field[3] & 0x1; - xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n", - (u8)(hw_dequeue & 0x1), index, new_cycle); - } else { - new_cycle = hw_dequeue & 0x1; - } + new_cycle = hw_dequeue & 0x1; /* * We want to find the pointer, segment and cycle state of the new trb diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c index 8a9c7deb7686e..d28fa892c2866 100644 --- a/drivers/usb/host/xhci-tegra.c +++ b/drivers/usb/host/xhci-tegra.c @@ -1145,15 +1145,15 @@ static int tegra_xusb_powerdomain_init(struct device *dev, int err; tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host"); - if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) { - err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA; + if (IS_ERR(tegra->genpd_dev_host)) { + err = PTR_ERR(tegra->genpd_dev_host); dev_err(dev, "failed to get host pm-domain: %d\n", err); return err; } tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss"); - if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) { - err = PTR_ERR(tegra->genpd_dev_ss) ? 
: -ENODATA; + if (IS_ERR(tegra->genpd_dev_ss)) { + err = PTR_ERR(tegra->genpd_dev_ss); dev_err(dev, "failed to get superspeed pm-domain: %d\n", err); return err; } diff --git a/drivers/usb/misc/ehset.c b/drivers/usb/misc/ehset.c index 986d6589f0535..36b6e9fa7ffb6 100644 --- a/drivers/usb/misc/ehset.c +++ b/drivers/usb/misc/ehset.c @@ -77,7 +77,7 @@ static int ehset_probe(struct usb_interface *intf, switch (test_pid) { case TEST_SE0_NAK_PID: ret = ehset_prepare_port_for_testing(hub_udev, portnum); - if (!ret) + if (ret < 0) break; ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, USB_RT_PORT, USB_PORT_FEAT_TEST, @@ -86,7 +86,7 @@ static int ehset_probe(struct usb_interface *intf, break; case TEST_J_PID: ret = ehset_prepare_port_for_testing(hub_udev, portnum); - if (!ret) + if (ret < 0) break; ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, USB_RT_PORT, USB_PORT_FEAT_TEST, @@ -95,7 +95,7 @@ static int ehset_probe(struct usb_interface *intf, break; case TEST_K_PID: ret = ehset_prepare_port_for_testing(hub_udev, portnum); - if (!ret) + if (ret < 0) break; ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, USB_RT_PORT, USB_PORT_FEAT_TEST, @@ -104,7 +104,7 @@ static int ehset_probe(struct usb_interface *intf, break; case TEST_PACKET_PID: ret = ehset_prepare_port_for_testing(hub_udev, portnum); - if (!ret) + if (ret < 0) break; ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, USB_RT_PORT, USB_PORT_FEAT_TEST, diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 288a96a742661..8ac98e60fff56 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -251,6 +251,7 @@ static void option_instat_callback(struct urb *urb); #define QUECTEL_PRODUCT_EM061K_LTA 0x0123 #define QUECTEL_PRODUCT_EM061K_LMS 0x0124 #define QUECTEL_PRODUCT_EC25 0x0125 +#define QUECTEL_PRODUCT_EM060K_128 0x0128 #define QUECTEL_PRODUCT_EG91 0x0191 #define QUECTEL_PRODUCT_EG95 0x0195 #define QUECTEL_PRODUCT_BG96 0x0296 @@ -268,6 +269,7 @@ static void option_instat_callback(struct urb *urb); #define QUECTEL_PRODUCT_RM520N 0x0801 #define QUECTEL_PRODUCT_EC200U 0x0901 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 +#define QUECTEL_PRODUCT_EC200A 0x6005 #define QUECTEL_PRODUCT_EM061K_LWW 0x6008 #define QUECTEL_PRODUCT_EM061K_LCN 0x6009 #define QUECTEL_PRODUCT_EC200T 0x6026 @@ -1197,6 +1199,9 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) }, @@ -1225,6 +1230,7 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* 
RM500U-CN */ .driver_info = ZLP }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200A, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c index 4c6747889a194..24b8772a345e2 100644 --- a/drivers/usb/serial/usb-serial-simple.c +++ b/drivers/usb/serial/usb-serial-simple.c @@ -38,16 +38,6 @@ static struct usb_serial_driver vendor##_device = { \ { USB_DEVICE(0x0a21, 0x8001) } /* MMT-7305WW */ DEVICE(carelink, CARELINK_IDS); -/* ZIO Motherboard USB driver */ -#define ZIO_IDS() \ - { USB_DEVICE(0x1CBE, 0x0103) } -DEVICE(zio, ZIO_IDS); - -/* Funsoft Serial USB driver */ -#define FUNSOFT_IDS() \ - { USB_DEVICE(0x1404, 0xcddc) } -DEVICE(funsoft, FUNSOFT_IDS); - /* Infineon Flashloader driver */ #define FLASHLOADER_IDS() \ { USB_DEVICE_INTERFACE_CLASS(0x058b, 0x0041, USB_CLASS_CDC_DATA) }, \ @@ -55,6 +45,11 @@ DEVICE(funsoft, FUNSOFT_IDS); { USB_DEVICE(0x8087, 0x0801) } DEVICE(flashloader, FLASHLOADER_IDS); +/* Funsoft Serial USB driver */ +#define FUNSOFT_IDS() \ + { USB_DEVICE(0x1404, 0xcddc) } +DEVICE(funsoft, FUNSOFT_IDS); + /* Google Serial USB SubClass */ #define GOOGLE_IDS() \ { USB_VENDOR_AND_INTERFACE_INFO(0x18d1, \ @@ -63,16 +58,21 @@ DEVICE(flashloader, FLASHLOADER_IDS); 0x01) } DEVICE(google, GOOGLE_IDS); +/* HP4x (48/49) Generic Serial driver */ +#define HP4X_IDS() \ + { USB_DEVICE(0x03f0, 0x0121) } +DEVICE(hp4x, HP4X_IDS); + +/* KAUFMANN RKS+CAN VCP */ +#define KAUFMANN_IDS() \ + { USB_DEVICE(0x16d0, 0x0870) } +DEVICE(kaufmann, KAUFMANN_IDS); + /* Libtransistor USB console */ #define LIBTRANSISTOR_IDS() \ { USB_DEVICE(0x1209, 0x8b00) } DEVICE(libtransistor, LIBTRANSISTOR_IDS); -/* ViVOpay USB Serial Driver */ -#define VIVOPAY_IDS() \ - { USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */ -DEVICE(vivopay, VIVOPAY_IDS); - /* Motorola USB Phone driver */ #define MOTO_IDS() \ { USB_DEVICE(0x05c6, 0x3197) }, /* unknown Motorola phone */ \ @@ -101,10 +101,10 @@ DEVICE(nokia, NOKIA_IDS); { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */ DEVICE_N(novatel_gps, NOVATEL_IDS, 3); -/* HP4x (48/49) Generic Serial driver */ -#define HP4X_IDS() \ - { USB_DEVICE(0x03f0, 0x0121) } -DEVICE(hp4x, HP4X_IDS); +/* Siemens USB/MPI adapter */ +#define SIEMENS_IDS() \ + { USB_DEVICE(0x908, 0x0004) } +DEVICE(siemens_mpi, SIEMENS_IDS); /* Suunto ANT+ USB Driver */ #define SUUNTO_IDS() \ @@ -112,45 +112,52 @@ DEVICE(hp4x, HP4X_IDS); { USB_DEVICE(0x0fcf, 0x1009) } /* Dynastream ANT USB-m Stick */ DEVICE(suunto, SUUNTO_IDS); -/* Siemens USB/MPI adapter */ -#define SIEMENS_IDS() \ - { USB_DEVICE(0x908, 0x0004) } -DEVICE(siemens_mpi, SIEMENS_IDS); +/* ViVOpay USB Serial Driver */ +#define VIVOPAY_IDS() \ + { USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */ +DEVICE(vivopay, VIVOPAY_IDS); + +/* ZIO Motherboard USB driver */ +#define ZIO_IDS() \ + { USB_DEVICE(0x1CBE, 0x0103) } +DEVICE(zio, ZIO_IDS); /* All of the above structures mushed into two lists */ static struct usb_serial_driver * const serial_drivers[] = { &carelink_device, - &zio_device, - &funsoft_device, &flashloader_device, + &funsoft_device, &google_device, + &hp4x_device, + &kaufmann_device, &libtransistor_device, - &vivopay_device, &moto_modem_device, &motorola_tetra_device, &nokia_device, 
&novatel_gps_device, - &hp4x_device, - &suunto_device, &siemens_mpi_device, + &suunto_device, + &vivopay_device, + &zio_device, NULL }; static const struct usb_device_id id_table[] = { CARELINK_IDS(), - ZIO_IDS(), - FUNSOFT_IDS(), FLASHLOADER_IDS(), + FUNSOFT_IDS(), GOOGLE_IDS(), + HP4X_IDS(), + KAUFMANN_IDS(), LIBTRANSISTOR_IDS(), - VIVOPAY_IDS(), MOTO_IDS(), MOTOROLA_TETRA_IDS(), NOKIA_IDS(), NOVATEL_IDS(), - HP4X_IDS(), - SUUNTO_IDS(), SIEMENS_IDS(), + SUUNTO_IDS(), + VIVOPAY_IDS(), + ZIO_IDS(), { }, }; MODULE_DEVICE_TABLE(usb, id_table); diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c index 349cc2030c903..5c6469548b203 100644 --- a/drivers/usb/typec/class.c +++ b/drivers/usb/typec/class.c @@ -1277,8 +1277,7 @@ static ssize_t select_usb_power_delivery_show(struct device *dev, { struct typec_port *port = to_typec_port(dev); struct usb_power_delivery **pds; - struct usb_power_delivery *pd; - int ret = 0; + int i, ret = 0; if (!port->ops || !port->ops->pd_get) return -EOPNOTSUPP; @@ -1287,11 +1286,11 @@ static ssize_t select_usb_power_delivery_show(struct device *dev, if (!pds) return 0; - for (pd = pds[0]; pd; pd++) { - if (pd == port->pd) - ret += sysfs_emit(buf + ret, "[%s] ", dev_name(&pd->dev)); + for (i = 0; pds[i]; i++) { + if (pds[i] == port->pd) + ret += sysfs_emit_at(buf, ret, "[%s] ", dev_name(&pds[i]->dev)); else - ret += sysfs_emit(buf + ret, "%s ", dev_name(&pd->dev)); + ret += sysfs_emit_at(buf, ret, "%s ", dev_name(&pds[i]->dev)); } buf[ret - 1] = '\n'; @@ -2288,6 +2287,8 @@ struct typec_port *typec_register_port(struct device *parent, return ERR_PTR(ret); } + port->pd = cap->pd; + ret = device_add(&port->dev); if (ret) { dev_err(parent, "failed to register port (%d)\n", ret); @@ -2295,7 +2296,7 @@ struct typec_port *typec_register_port(struct device *parent, return ERR_PTR(ret); } - ret = typec_port_set_usb_power_delivery(port, cap->pd); + ret = usb_power_delivery_link_device(port->pd, &port->dev); if (ret) { dev_err(&port->dev, "failed to link pd\n"); device_unregister(&port->dev); diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c index e1ec725c2819d..f13c3b76ad1eb 100644 --- a/drivers/xen/grant-table.c +++ b/drivers/xen/grant-table.c @@ -498,14 +498,21 @@ static LIST_HEAD(deferred_list); static void gnttab_handle_deferred(struct timer_list *); static DEFINE_TIMER(deferred_timer, gnttab_handle_deferred); +static atomic64_t deferred_count; +static atomic64_t leaked_count; +static unsigned int free_per_iteration = 10; +module_param(free_per_iteration, uint, 0600); + static void gnttab_handle_deferred(struct timer_list *unused) { - unsigned int nr = 10; + unsigned int nr = READ_ONCE(free_per_iteration); + const bool ignore_limit = nr == 0; struct deferred_entry *first = NULL; unsigned long flags; + size_t freed = 0; spin_lock_irqsave(&gnttab_list_lock, flags); - while (nr--) { + while ((ignore_limit || nr--) && !list_empty(&deferred_list)) { struct deferred_entry *entry = list_first_entry(&deferred_list, struct deferred_entry, list); @@ -515,10 +522,14 @@ static void gnttab_handle_deferred(struct timer_list *unused) list_del(&entry->list); spin_unlock_irqrestore(&gnttab_list_lock, flags); if (_gnttab_end_foreign_access_ref(entry->ref)) { + uint64_t ret = atomic64_dec_return(&deferred_count); + put_free_entry(entry->ref); - pr_debug("freeing g.e. %#x (pfn %#lx)\n", - entry->ref, page_to_pfn(entry->page)); + pr_debug("freeing g.e. 
%#x (pfn %#lx), %llu remaining\n", + entry->ref, page_to_pfn(entry->page), + (unsigned long long)ret); put_page(entry->page); + freed++; kfree(entry); entry = NULL; } else { @@ -530,21 +541,22 @@ static void gnttab_handle_deferred(struct timer_list *unused) spin_lock_irqsave(&gnttab_list_lock, flags); if (entry) list_add_tail(&entry->list, &deferred_list); - else if (list_empty(&deferred_list)) - break; } - if (!list_empty(&deferred_list) && !timer_pending(&deferred_timer)) { + if (list_empty(&deferred_list)) + WARN_ON(atomic64_read(&deferred_count)); + else if (!timer_pending(&deferred_timer)) { deferred_timer.expires = jiffies + HZ; add_timer(&deferred_timer); } spin_unlock_irqrestore(&gnttab_list_lock, flags); + pr_debug("Freed %zu references", freed); } static void gnttab_add_deferred(grant_ref_t ref, struct page *page) { struct deferred_entry *entry; gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL; - const char *what = KERN_WARNING "leaking"; + uint64_t leaked, deferred; entry = kmalloc(sizeof(*entry), gfp); if (!page) { @@ -567,10 +579,16 @@ static void gnttab_add_deferred(grant_ref_t ref, struct page *page) add_timer(&deferred_timer); } spin_unlock_irqrestore(&gnttab_list_lock, flags); - what = KERN_DEBUG "deferring"; + deferred = atomic64_inc_return(&deferred_count); + leaked = atomic64_read(&leaked_count); + pr_debug("deferring g.e. %#x (pfn %#lx) (total deferred %llu, total leaked %llu)\n", + ref, page ? page_to_pfn(page) : -1, deferred, leaked); + } else { + deferred = atomic64_read(&deferred_count); + leaked = atomic64_inc_return(&leaked_count); + pr_warn("leaking g.e. %#x (pfn %#lx) (total deferred %llu, total leaked %llu)\n", + ref, page ? page_to_pfn(page) : -1, deferred, leaked); } - printk("%s g.e. %#x (pfn %#lx)\n", - what, ref, page ? page_to_pfn(page) : -1); } int gnttab_try_end_foreign_access(grant_ref_t ref) diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c index 58b732dcbfb83..639bf628389ba 100644 --- a/drivers/xen/xenbus/xenbus_probe.c +++ b/drivers/xen/xenbus/xenbus_probe.c @@ -811,6 +811,9 @@ static int xenbus_probe_thread(void *unused) static int __init xenbus_probe_initcall(void) { + if (!xen_domain()) + return -ENODEV; + /* * Probe XenBus here in the XS_PV case, and also XS_HVM unless we * need to wait for the platform PCI device to come up or diff --git a/fs/9p/fid.h b/fs/9p/fid.h index 0c51889a60b33..29281b7c38870 100644 --- a/fs/9p/fid.h +++ b/fs/9p/fid.h @@ -46,8 +46,8 @@ static inline struct p9_fid *v9fs_fid_clone(struct dentry *dentry) * NOTE: these are set after open so only reflect 9p client not * underlying file system on server. 
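The xen/grant-table.c hunk above replaces a hard-coded batch of 10 deferred frees per timer tick with the free_per_iteration module parameter (0 meaning "no limit") and counts deferred and leaked entries with atomic64s. A rough userspace analogue of the batching loop — names and the list representation are illustrative, not the kernel's:

#include <stdbool.h>
#include <stddef.h>

struct entry { struct entry *next; unsigned int ref; };

static unsigned int free_per_iteration = 10; /* kernel: module_param(..., 0600); 0 == no limit */

static size_t handle_deferred(struct entry **list, bool (*try_free)(struct entry *))
{
    unsigned int nr = free_per_iteration;   /* kernel: READ_ONCE() */
    const bool ignore_limit = (nr == 0);
    size_t freed = 0;

    while ((ignore_limit || nr--) && *list) {
        struct entry *e = *list;

        *list = e->next;
        if (try_free(e)) {
            freed++;             /* kernel: put_free_entry(), put_page(), kfree() */
        } else {
            e->next = *list;     /* grant still in use: requeue, retry next tick */
            *list = e;
            break;               /* kernel requeues at the tail and rearms a timer */
        }
    }
    return freed;
}

The short-circuit in the loop condition is the load-bearing detail: when ignore_limit is true, nr-- is never evaluated, so a limit of 0 drains the whole list in one pass.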
*/ -static inline void v9fs_fid_add_modes(struct p9_fid *fid, int s_flags, - int s_cache, unsigned int f_flags) +static inline void v9fs_fid_add_modes(struct p9_fid *fid, unsigned int s_flags, + unsigned int s_cache, unsigned int f_flags) { if (fid->qid.type != P9_QTFILE) return; @@ -57,7 +57,7 @@ static inline void v9fs_fid_add_modes(struct p9_fid *fid, int s_flags, (s_flags & V9FS_DIRECT_IO) || (f_flags & O_DIRECT)) { fid->mode |= P9L_DIRECT; /* no read or write cache */ } else if ((!(s_cache & CACHE_WRITEBACK)) || - (f_flags & O_DSYNC) | (s_flags & V9FS_SYNC)) { + (f_flags & O_DSYNC) || (s_flags & V9FS_SYNC)) { fid->mode |= P9L_NOWRITECACHE; } } diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h index 06a2514f0d882..698c43dd5dc86 100644 --- a/fs/9p/v9fs.h +++ b/fs/9p/v9fs.h @@ -108,7 +108,7 @@ enum p9_cache_bits { struct v9fs_session_info { /* options */ - unsigned char flags; + unsigned int flags; unsigned char nodev; unsigned short debug; unsigned int afid; diff --git a/fs/9p/vfs_dir.c b/fs/9p/vfs_dir.c index 45b684b7d8d7c..4102759a5cb56 100644 --- a/fs/9p/vfs_dir.c +++ b/fs/9p/vfs_dir.c @@ -208,7 +208,7 @@ int v9fs_dir_release(struct inode *inode, struct file *filp) struct p9_fid *fid; __le32 version; loff_t i_size; - int retval = 0; + int retval = 0, put_err; fid = filp->private_data; p9_debug(P9_DEBUG_VFS, "inode: %p filp: %p fid: %d\n", @@ -221,7 +221,8 @@ int v9fs_dir_release(struct inode *inode, struct file *filp) spin_lock(&inode->i_lock); hlist_del(&fid->ilist); spin_unlock(&inode->i_lock); - retval = p9_fid_put(fid); + put_err = p9_fid_put(fid); + retval = retval < 0 ? retval : put_err; } if ((filp->f_mode & FMODE_WRITE)) { diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c index 6c31b8c8112d9..99cb4f04cbdbb 100644 --- a/fs/9p/vfs_file.c +++ b/fs/9p/vfs_file.c @@ -483,10 +483,7 @@ v9fs_file_mmap(struct file *filp, struct vm_area_struct *vma) p9_debug(P9_DEBUG_MMAP, "filp :%p\n", filp); if (!(v9ses->cache & CACHE_WRITEBACK)) { - p9_debug(P9_DEBUG_CACHE, "(no mmap mode)"); - if (vma->vm_flags & VM_MAYSHARE) - return -ENODEV; - invalidate_inode_pages2(filp->f_mapping); + p9_debug(P9_DEBUG_CACHE, "(read-only mmap mode)"); return generic_file_readonly_mmap(filp, vma); } diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c index ac18c43fadadc..8e21c2faff625 100644 --- a/fs/btrfs/block-rsv.c +++ b/fs/btrfs/block-rsv.c @@ -349,6 +349,11 @@ void btrfs_update_global_block_rsv(struct btrfs_fs_info *fs_info) } read_unlock(&fs_info->global_root_lock); + if (btrfs_fs_compat_ro(fs_info, BLOCK_GROUP_TREE)) { + num_bytes += btrfs_root_used(&fs_info->block_group_root->root_item); + min_items++; + } + /* * But we also want to reserve enough space so we can do the fallback * global reserve for an unlink, which is an additional diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 795b30913c542..9f056ad41df04 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3692,11 +3692,16 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device * For devices supporting discard turn on discard=async automatically, * unless it's already set or disabled. This could be turned off by * nodiscard for the same mount. + * + * The zoned mode piggy backs on the discard functionality for + * resetting a zone. There is no reason to delay the zone reset as it is + * fast enough. So, do not enable async discard for zoned mode. 
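Among the 9p hunks above, v9fs_session_info.flags is widened from unsigned char to unsigned int and v9fs_fid_add_modes() now takes unsigned int, presumably so that session flag bits above bit 7 are no longer silently truncated when stored or passed around. A compilable illustration of that failure mode (the flag value here is hypothetical):

#include <assert.h>

#define DEMO_FLAG (1u << 9)   /* hypothetical flag bit above bit 7 */

int main(void)
{
    unsigned char narrow = DEMO_FLAG; /* implicit conversion drops the bit */
    unsigned int  wide   = DEMO_FLAG;

    assert((narrow & DEMO_FLAG) == 0); /* silently lost */
    assert((wide   & DEMO_FLAG) != 0); /* preserved */
    return 0;
}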
*/ if (!(btrfs_test_opt(fs_info, DISCARD_SYNC) || btrfs_test_opt(fs_info, DISCARD_ASYNC) || btrfs_test_opt(fs_info, NODISCARD)) && - fs_info->fs_devices->discardable) { + fs_info->fs_devices->discardable && + !btrfs_is_zoned(fs_info)) { btrfs_set_and_info(fs_info, DISCARD_ASYNC, "auto enabling async discard"); } diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index a37a6587efaf0..82b9779deaa88 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -478,6 +478,15 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end, start, end, page_ops, NULL); } +static bool btrfs_verify_page(struct page *page, u64 start) +{ + if (!fsverity_active(page->mapping->host) || + PageError(page) || PageUptodate(page) || + start >= i_size_read(page->mapping->host)) + return true; + return fsverity_verify_page(page); +} + static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) { struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb); @@ -485,16 +494,8 @@ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len) ASSERT(page_offset(page) <= start && start + len <= page_offset(page) + PAGE_SIZE); - if (uptodate) { - if (fsverity_active(page->mapping->host) && - !PageError(page) && - !PageUptodate(page) && - start < i_size_read(page->mapping->host) && - !fsverity_verify_page(page)) { - btrfs_page_set_error(fs_info, page, start, len); - } else { - btrfs_page_set_uptodate(fs_info, page, start, len); - } + if (uptodate && btrfs_verify_page(page, start)) { + btrfs_page_set_uptodate(fs_info, page, start, len); } else { btrfs_page_clear_uptodate(fs_info, page, start, len); btrfs_page_set_error(fs_info, page, start, len); diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index 360bf2522a871..2637d6b157ff9 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -1232,12 +1232,23 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info) int ret = 0; /* - * We need to have subvol_sem write locked, to prevent races between - * concurrent tasks trying to disable quotas, because we will unlock - * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes. + * We need to have subvol_sem write locked to prevent races with + * snapshot creation. */ lockdep_assert_held_write(&fs_info->subvol_sem); + /* + * Lock the cleaner mutex to prevent races with concurrent relocation, + * because relocation may be building backrefs for blocks of the quota + * root while we are deleting the root. This is like dropping fs roots + * of deleted snapshots/subvolumes, we need the same protection. + * + * This also prevents races between concurrent tasks trying to disable + * quotas, because we will unlock and relock qgroup_ioctl_lock across + * BTRFS_FS_QUOTA_ENABLED changes. 
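The extent_io.c hunk above is a readability refactor: the nested fsverity condition in end_page_read() is factored into a btrfs_verify_page() predicate, so the caller collapses to a single "uptodate && verified" test. A minimal userspace sketch of the same pattern, with illustrative types standing in for struct page:

#include <stdbool.h>

struct pg { bool verity_active, error, uptodate; long start, isize; };

/* Pages that need no verification pass trivially; only the rest are
 * handed to the (caller-supplied) verifier. */
static bool verify_page(const struct pg *p, bool (*verify)(const struct pg *))
{
    if (!p->verity_active || p->error || p->uptodate || p->start >= p->isize)
        return true;
    return verify(p);
}

static void end_read(struct pg *p, bool uptodate, bool (*verify)(const struct pg *))
{
    if (uptodate && verify_page(p, verify))
        p->uptodate = true;
    else
        p->error = true;        /* kernel also clears the uptodate range here */
}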
+ */ + mutex_lock(&fs_info->cleaner_mutex); + mutex_lock(&fs_info->qgroup_ioctl_lock); if (!fs_info->quota_root) goto out; @@ -1319,6 +1330,7 @@ out: btrfs_end_transaction(trans); else if (trans) ret = btrfs_end_transaction(trans); + mutex_unlock(&fs_info->cleaner_mutex); return ret; } diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 8b6a99b8d7f6d..eaf7511bbc37e 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -828,8 +828,13 @@ btrfs_attach_transaction_barrier(struct btrfs_root *root) trans = start_transaction(root, 0, TRANS_ATTACH, BTRFS_RESERVE_NO_FLUSH, true); - if (trans == ERR_PTR(-ENOENT)) - btrfs_wait_for_commit(root->fs_info, 0); + if (trans == ERR_PTR(-ENOENT)) { + int ret; + + ret = btrfs_wait_for_commit(root->fs_info, 0); + if (ret) + return ERR_PTR(ret); + } return trans; } @@ -933,6 +938,7 @@ int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid) } wait_for_commit(cur_trans, TRANS_STATE_COMPLETED); + ret = cur_trans->aborted; btrfs_put_transaction(cur_trans); out: return ret; diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c index 39828af4a4e8c..1c56e056afccf 100644 --- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -804,6 +804,9 @@ int btrfs_check_mountopts_zoned(struct btrfs_fs_info *info) return -EINVAL; } + btrfs_clear_and_info(info, DISCARD_ASYNC, + "zoned: async discard ignored and disabled for zoned mode"); + return 0; } diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c index c47347d2e84e3..9560b7bc6009a 100644 --- a/fs/ceph/metric.c +++ b/fs/ceph/metric.c @@ -208,7 +208,7 @@ static void metric_delayed_work(struct work_struct *work) struct ceph_mds_client *mdsc = container_of(m, struct ceph_mds_client, metric); - if (mdsc->stopping) + if (mdsc->stopping || disable_send_metrics) return; if (!m->session || !check_session_state(m->session)) { diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index fd4d12c58c3b4..3fa5de892d89d 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -4528,6 +4528,37 @@ ext4_mb_check_group_pa(ext4_fsblk_t goal_block, return pa; } +/* + * check if found pa meets EXT4_MB_HINT_GOAL_ONLY + */ +static bool +ext4_mb_pa_goal_check(struct ext4_allocation_context *ac, + struct ext4_prealloc_space *pa) +{ + struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb); + ext4_fsblk_t start; + + if (likely(!(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))) + return true; + + /* + * If EXT4_MB_HINT_GOAL_ONLY is set, ac_g_ex will not be adjusted + * in ext4_mb_normalize_request and will keep same with ac_o_ex + * from ext4_mb_initialize_context. Choose ac_g_ex here to keep + * consistent with ext4_mb_find_by_goal. 
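The transaction.c hunk above closes a swallowed-error path: btrfs_wait_for_commit() now reports cur_trans->aborted, and btrfs_attach_transaction_barrier() propagates that instead of discarding it. A compact sketch of the pattern under simplified, illustrative types (the ERR_PTR encoding mimics the kernel's):

#include <errno.h>
#include <stdint.h>

#define ERR_PTR(err) ((void *)(intptr_t)(err))

struct trans { int aborted; };

static int wait_for_commit(struct trans *t)
{
    /* ... wait for TRANS_STATE_COMPLETED ... */
    return t->aborted;          /* the fix: report an aborted transaction */
}

static void *attach_barrier(struct trans *running, struct trans *latest)
{
    if (!running) {             /* kernel: start_transaction() == ERR_PTR(-ENOENT) */
        int ret = wait_for_commit(latest);

        if (ret)
            return ERR_PTR(ret);    /* surface the abort to the caller */
        return ERR_PTR(-ENOENT);    /* waited fine, nothing to attach to */
    }
    return running;
}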
+ */ + start = pa->pa_pstart + + (ac->ac_g_ex.fe_logical - pa->pa_lstart); + if (ext4_grp_offs_to_block(ac->ac_sb, &ac->ac_g_ex) != start) + return false; + + if (ac->ac_g_ex.fe_len > pa->pa_len - + EXT4_B2C(sbi, ac->ac_g_ex.fe_logical - pa->pa_lstart)) + return false; + + return true; +} + /* * search goal blocks in preallocated space */ @@ -4538,8 +4569,8 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac) int order, i; struct ext4_inode_info *ei = EXT4_I(ac->ac_inode); struct ext4_locality_group *lg; - struct ext4_prealloc_space *tmp_pa, *cpa = NULL; - ext4_lblk_t tmp_pa_start, tmp_pa_end; + struct ext4_prealloc_space *tmp_pa = NULL, *cpa = NULL; + loff_t tmp_pa_end; struct rb_node *iter; ext4_fsblk_t goal_block; @@ -4547,47 +4578,151 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac) if (!(ac->ac_flags & EXT4_MB_HINT_DATA)) return false; - /* first, try per-file preallocation */ + /* + * first, try per-file preallocation by searching the inode pa rbtree. + * + * Here, we can't do a direct traversal of the tree because + * ext4_mb_discard_group_preallocation() can paralelly mark the pa + * deleted and that can cause direct traversal to skip some entries. + */ read_lock(&ei->i_prealloc_lock); + + if (RB_EMPTY_ROOT(&ei->i_prealloc_node)) { + goto try_group_pa; + } + + /* + * Step 1: Find a pa with logical start immediately adjacent to the + * original logical start. This could be on the left or right. + * + * (tmp_pa->pa_lstart never changes so we can skip locking for it). + */ for (iter = ei->i_prealloc_node.rb_node; iter; iter = ext4_mb_pa_rb_next_iter(ac->ac_o_ex.fe_logical, - tmp_pa_start, iter)) { + tmp_pa->pa_lstart, iter)) { tmp_pa = rb_entry(iter, struct ext4_prealloc_space, pa_node.inode_node); + } - /* all fields in this condition don't change, - * so we can skip locking for them */ - tmp_pa_start = tmp_pa->pa_lstart; - tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len); - - /* original request start doesn't lie in this PA */ - if (ac->ac_o_ex.fe_logical < tmp_pa_start || - ac->ac_o_ex.fe_logical >= tmp_pa_end) - continue; + /* + * Step 2: The adjacent pa might be to the right of logical start, find + * the left adjacent pa. After this step we'd have a valid tmp_pa whose + * logical start is towards the left of original request's logical start + */ + if (tmp_pa->pa_lstart > ac->ac_o_ex.fe_logical) { + struct rb_node *tmp; + tmp = rb_prev(&tmp_pa->pa_node.inode_node); - /* non-extent files can't have physical blocks past 2^32 */ - if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) && - (tmp_pa->pa_pstart + EXT4_C2B(sbi, tmp_pa->pa_len) > - EXT4_MAX_BLOCK_FILE_PHYS)) { + if (tmp) { + tmp_pa = rb_entry(tmp, struct ext4_prealloc_space, + pa_node.inode_node); + } else { /* - * Since PAs don't overlap, we won't find any - * other PA to satisfy this. + * If there is no adjacent pa to the left then finding + * an overlapping pa is not possible hence stop searching + * inode pa tree */ - break; + goto try_group_pa; } + } + + BUG_ON(!(tmp_pa && tmp_pa->pa_lstart <= ac->ac_o_ex.fe_logical)); - /* found preallocated blocks, use them */ + /* + * Step 3: If the left adjacent pa is deleted, keep moving left to find + * the first non deleted adjacent pa. After this step we should have a + * valid tmp_pa which is guaranteed to be non deleted. 
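Steps 1 and 2 of the ext4 search above amount to a floor lookup: find the preallocation whose logical start is the rightmost one at or below the request. A generic, compilable version of that descent on a plain BST (the kernel then uses rb_prev() on top of this to step past entries marked deleted, per step 3):

#include <stddef.h>

struct node { long key; struct node *left, *right; };

/* Floor search: the rightmost node with key <= target, i.e. the
 * "left adjacent" entry the comment describes. */
static struct node *find_le(struct node *root, long target)
{
    struct node *cur = root, *best = NULL;

    while (cur) {
        if (cur->key <= target) {
            best = cur;         /* candidate; a closer one may sit to its right */
            cur = cur->right;
        } else {
            cur = cur->left;
        }
    }
    return best;
}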
+ */ + for (iter = &tmp_pa->pa_node.inode_node;; iter = rb_prev(iter)) { + if (!iter) { + /* + * no non deleted left adjacent pa, so stop searching + * inode pa tree + */ + goto try_group_pa; + } + tmp_pa = rb_entry(iter, struct ext4_prealloc_space, + pa_node.inode_node); spin_lock(&tmp_pa->pa_lock); - if (tmp_pa->pa_deleted == 0 && tmp_pa->pa_free) { - atomic_inc(&tmp_pa->pa_count); - ext4_mb_use_inode_pa(ac, tmp_pa); + if (tmp_pa->pa_deleted == 0) { + /* + * We will keep holding the pa_lock from + * this point on because we don't want group discard + * to delete this pa underneath us. Since group + * discard is anyways an ENOSPC operation it + * should be okay for it to wait a few more cycles. + */ + break; + } else { spin_unlock(&tmp_pa->pa_lock); - ac->ac_criteria = 10; - read_unlock(&ei->i_prealloc_lock); - return true; } + } + + BUG_ON(!(tmp_pa && tmp_pa->pa_lstart <= ac->ac_o_ex.fe_logical)); + BUG_ON(tmp_pa->pa_deleted == 1); + + /* + * Step 4: We now have the non deleted left adjacent pa. Only this + * pa can possibly satisfy the request hence check if it overlaps + * original logical start and stop searching if it doesn't. + */ + tmp_pa_end = (loff_t)tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len); + + if (ac->ac_o_ex.fe_logical >= tmp_pa_end) { + spin_unlock(&tmp_pa->pa_lock); + goto try_group_pa; + } + + /* non-extent files can't have physical blocks past 2^32 */ + if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) && + (tmp_pa->pa_pstart + EXT4_C2B(sbi, tmp_pa->pa_len) > + EXT4_MAX_BLOCK_FILE_PHYS)) { + /* + * Since PAs don't overlap, we won't find any other PA to + * satisfy this. + */ + spin_unlock(&tmp_pa->pa_lock); + goto try_group_pa; + } + + if (tmp_pa->pa_free && likely(ext4_mb_pa_goal_check(ac, tmp_pa))) { + atomic_inc(&tmp_pa->pa_count); + ext4_mb_use_inode_pa(ac, tmp_pa); spin_unlock(&tmp_pa->pa_lock); + read_unlock(&ei->i_prealloc_lock); + return true; + } else { + /* + * We found a valid overlapping pa but couldn't use it because + * it had no free blocks. This should ideally never happen + * because: + * + * 1. When a new inode pa is added to rbtree it must have + * pa_free > 0 since otherwise we won't actually need + * preallocation. + * + * 2. An inode pa that is in the rbtree can only have it's + * pa_free become zero when another thread calls: + * ext4_mb_new_blocks + * ext4_mb_use_preallocated + * ext4_mb_use_inode_pa + * + * 3. Further, after the above calls make pa_free == 0, we will + * immediately remove it from the rbtree in: + * ext4_mb_new_blocks + * ext4_mb_release_context + * ext4_mb_put_pa + * + * 4. Since the pa_free becoming 0 and pa_free getting removed + * from tree both happen in ext4_mb_new_blocks, which is always + * called with i_data_sem held for data allocations, we can be + * sure that another process will never see a pa in rbtree with + * pa_free == 0. + */ + WARN_ON_ONCE(tmp_pa->pa_free == 0); } + spin_unlock(&tmp_pa->pa_lock); +try_group_pa: read_unlock(&ei->i_prealloc_lock); /* can we use group allocation? */ @@ -4625,7 +4760,6 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac) } if (cpa) { ext4_mb_use_group_pa(ac, cpa); - ac->ac_criteria = 20; return true; } return false; @@ -5399,6 +5533,10 @@ static void ext4_mb_show_ac(struct ext4_allocation_context *ac) (unsigned long)ac->ac_b_ex.fe_logical, (int)ac->ac_criteria); mb_debug(sb, "%u found", ac->ac_found); + mb_debug(sb, "used pa: %s, ", ac->ac_pa ? "yes" : "no"); + if (ac->ac_pa) + mb_debug(sb, "pa_type %s\n", ac->ac_pa->pa_type == MB_GROUP_PA ? 
+ "group pa" : "inode pa"); ext4_mb_show_pa(sb); } #else diff --git a/fs/file.c b/fs/file.c index 7893ea161d770..35c62b54c9d65 100644 --- a/fs/file.c +++ b/fs/file.c @@ -1042,10 +1042,8 @@ unsigned long __fdget_pos(unsigned int fd) struct file *file = (struct file *)(v & ~3); if (file && (file->f_mode & FMODE_ATOMIC_POS)) { - if (file_count(file) > 1) { - v |= FDPUT_POS_UNLOCK; - mutex_lock(&file->f_pos_lock); - } + v |= FDPUT_POS_UNLOCK; + mutex_lock(&file->f_pos_lock); } return v; } diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c index 25e3c20eb19f6..c4e0da6db7195 100644 --- a/fs/jbd2/checkpoint.c +++ b/fs/jbd2/checkpoint.c @@ -221,20 +221,6 @@ restart: jh = transaction->t_checkpoint_list; bh = jh2bh(jh); - /* - * The buffer may be writing back, or flushing out in the - * last couple of cycles, or re-adding into a new transaction, - * need to check it again until it's unlocked. - */ - if (buffer_locked(bh)) { - get_bh(bh); - spin_unlock(&journal->j_list_lock); - wait_on_buffer(bh); - /* the journal_head may have gone by now */ - BUFFER_TRACE(bh, "brelse"); - __brelse(bh); - goto retry; - } if (jh->b_transaction != NULL) { transaction_t *t = jh->b_transaction; tid_t tid = t->t_tid; @@ -269,7 +255,22 @@ restart: spin_lock(&journal->j_list_lock); goto restart; } - if (!buffer_dirty(bh)) { + if (!trylock_buffer(bh)) { + /* + * The buffer is locked, it may be writing back, or + * flushing out in the last couple of cycles, or + * re-adding into a new transaction, need to check + * it again until it's unlocked. + */ + get_bh(bh); + spin_unlock(&journal->j_list_lock); + wait_on_buffer(bh); + /* the journal_head may have gone by now */ + BUFFER_TRACE(bh, "brelse"); + __brelse(bh); + goto retry; + } else if (!buffer_dirty(bh)) { + unlock_buffer(bh); BUFFER_TRACE(bh, "remove from checkpoint"); /* * If the transaction was released or the checkpoint @@ -279,6 +280,7 @@ restart: !transaction->t_checkpoint_list) goto out; } else { + unlock_buffer(bh); /* * We are about to write the buffer, it could be * raced by some other transaction shrink or buffer diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 6e61fa3acaf11..3aefbad4cc099 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -6341,8 +6341,6 @@ static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid) if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) || CLOSE_STATEID(stateid)) return status; - if (!same_clid(&stateid->si_opaque.so_clid, &cl->cl_clientid)) - return status; spin_lock(&cl->cl_lock); s = find_stateid_locked(cl, stateid); if (!s) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 03f5963914a14..a0e1463c3fc4d 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -132,7 +132,7 @@ ssize_t read_from_oldmem(struct iov_iter *iter, size_t count, u64 *ppos, bool encrypted) { unsigned long pfn, offset; - size_t nr_bytes; + ssize_t nr_bytes; ssize_t read = 0, tmp; int idx; diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c index 335c078c42fb5..c57ca2050b73f 100644 --- a/fs/smb/client/sess.c +++ b/fs/smb/client/sess.c @@ -1013,6 +1013,7 @@ setup_ntlm_smb3_neg_ret: } +/* See MS-NLMP 2.2.1.3 */ int build_ntlmssp_auth_blob(unsigned char **pbuffer, u16 *buflen, struct cifs_ses *ses, @@ -1047,7 +1048,8 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer, flags = ses->ntlmssp->server_flags | NTLMSSP_REQUEST_TARGET | NTLMSSP_NEGOTIATE_TARGET_INFO | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED; - + /* we only send version information in ntlmssp negotiate, so do not set this flag */ + flags = 
flags & ~NTLMSSP_NEGOTIATE_VERSION; tmp = *pbuffer + sizeof(AUTHENTICATE_MESSAGE); sec_blob->NegotiateFlags = cpu_to_le32(flags); diff --git a/fs/smb/server/ksmbd_netlink.h b/fs/smb/server/ksmbd_netlink.h index fb8b2d566efb6..b7521e41402e0 100644 --- a/fs/smb/server/ksmbd_netlink.h +++ b/fs/smb/server/ksmbd_netlink.h @@ -352,7 +352,8 @@ enum KSMBD_TREE_CONN_STATUS { #define KSMBD_SHARE_FLAG_STREAMS BIT(11) #define KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS BIT(12) #define KSMBD_SHARE_FLAG_ACL_XATTR BIT(13) -#define KSMBD_SHARE_FLAG_UPDATE BIT(14) +#define KSMBD_SHARE_FLAG_UPDATE BIT(14) +#define KSMBD_SHARE_FLAG_CROSSMNT BIT(15) /* * Tree connect request flags. diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c index 1cc336f512851..d7e5196485604 100644 --- a/fs/smb/server/smb2pdu.c +++ b/fs/smb/server/smb2pdu.c @@ -2467,8 +2467,9 @@ static void smb2_update_xattrs(struct ksmbd_tree_connect *tcon, } } -static int smb2_creat(struct ksmbd_work *work, struct path *path, char *name, - int open_flags, umode_t posix_mode, bool is_dir) +static int smb2_creat(struct ksmbd_work *work, struct path *parent_path, + struct path *path, char *name, int open_flags, + umode_t posix_mode, bool is_dir) { struct ksmbd_tree_connect *tcon = work->tcon; struct ksmbd_share_config *share = tcon->share_conf; @@ -2495,7 +2496,7 @@ static int smb2_creat(struct ksmbd_work *work, struct path *path, char *name, return rc; } - rc = ksmbd_vfs_kern_path_locked(work, name, 0, path, 0); + rc = ksmbd_vfs_kern_path_locked(work, name, 0, parent_path, path, 0); if (rc) { pr_err("cannot get linux path (%s), err = %d\n", name, rc); @@ -2565,7 +2566,7 @@ int smb2_open(struct ksmbd_work *work) struct ksmbd_tree_connect *tcon = work->tcon; struct smb2_create_req *req; struct smb2_create_rsp *rsp; - struct path path; + struct path path, parent_path; struct ksmbd_share_config *share = tcon->share_conf; struct ksmbd_file *fp = NULL; struct file *filp = NULL; @@ -2786,7 +2787,8 @@ int smb2_open(struct ksmbd_work *work) goto err_out1; } - rc = ksmbd_vfs_kern_path_locked(work, name, LOOKUP_NO_SYMLINKS, &path, 1); + rc = ksmbd_vfs_kern_path_locked(work, name, LOOKUP_NO_SYMLINKS, + &parent_path, &path, 1); if (!rc) { file_present = true; @@ -2908,7 +2910,8 @@ int smb2_open(struct ksmbd_work *work) /*create file if not present */ if (!file_present) { - rc = smb2_creat(work, &path, name, open_flags, posix_mode, + rc = smb2_creat(work, &parent_path, &path, name, open_flags, + posix_mode, req->CreateOptions & FILE_DIRECTORY_FILE_LE); if (rc) { if (rc == -ENOENT) { @@ -3323,8 +3326,9 @@ int smb2_open(struct ksmbd_work *work) err_out: if (file_present || created) { - inode_unlock(d_inode(path.dentry->d_parent)); - dput(path.dentry); + inode_unlock(d_inode(parent_path.dentry)); + path_put(&path); + path_put(&parent_path); } ksmbd_revert_fsids(work); err_out1: @@ -5547,7 +5551,7 @@ static int smb2_create_link(struct ksmbd_work *work, struct nls_table *local_nls) { char *link_name = NULL, *target_name = NULL, *pathname = NULL; - struct path path; + struct path path, parent_path; bool file_present = false; int rc; @@ -5577,7 +5581,7 @@ static int smb2_create_link(struct ksmbd_work *work, ksmbd_debug(SMB, "target name is %s\n", target_name); rc = ksmbd_vfs_kern_path_locked(work, link_name, LOOKUP_NO_SYMLINKS, - &path, 0); + &parent_path, &path, 0); if (rc) { if (rc != -ENOENT) goto out; @@ -5607,8 +5611,9 @@ static int smb2_create_link(struct ksmbd_work *work, rc = -EINVAL; out: if (file_present) { - inode_unlock(d_inode(path.dentry->d_parent)); + 
inode_unlock(d_inode(parent_path.dentry)); path_put(&path); + path_put(&parent_path); } if (!IS_ERR(link_name)) kfree(link_name); diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c index 81489fdedd8e0..911cb3d294b86 100644 --- a/fs/smb/server/vfs.c +++ b/fs/smb/server/vfs.c @@ -63,13 +63,13 @@ int ksmbd_vfs_lock_parent(struct dentry *parent, struct dentry *child) static int ksmbd_vfs_path_lookup_locked(struct ksmbd_share_config *share_conf, char *pathname, unsigned int flags, + struct path *parent_path, struct path *path) { struct qstr last; struct filename *filename; struct path *root_share_path = &share_conf->vfs_path; int err, type; - struct path parent_path; struct dentry *d; if (pathname[0] == '\0') { @@ -84,7 +84,7 @@ static int ksmbd_vfs_path_lookup_locked(struct ksmbd_share_config *share_conf, return PTR_ERR(filename); err = vfs_path_parent_lookup(filename, flags, - &parent_path, &last, &type, + parent_path, &last, &type, root_share_path); if (err) { putname(filename); @@ -92,13 +92,13 @@ static int ksmbd_vfs_path_lookup_locked(struct ksmbd_share_config *share_conf, } if (unlikely(type != LAST_NORM)) { - path_put(&parent_path); + path_put(parent_path); putname(filename); return -ENOENT; } - inode_lock_nested(parent_path.dentry->d_inode, I_MUTEX_PARENT); - d = lookup_one_qstr_excl(&last, parent_path.dentry, 0); + inode_lock_nested(parent_path->dentry->d_inode, I_MUTEX_PARENT); + d = lookup_one_qstr_excl(&last, parent_path->dentry, 0); if (IS_ERR(d)) goto err_out; @@ -108,15 +108,22 @@ static int ksmbd_vfs_path_lookup_locked(struct ksmbd_share_config *share_conf, } path->dentry = d; - path->mnt = share_conf->vfs_path.mnt; - path_put(&parent_path); - putname(filename); + path->mnt = mntget(parent_path->mnt); + + if (test_share_config_flag(share_conf, KSMBD_SHARE_FLAG_CROSSMNT)) { + err = follow_down(path, 0); + if (err < 0) { + path_put(path); + goto err_out; + } + } + putname(filename); return 0; err_out: - inode_unlock(parent_path.dentry->d_inode); - path_put(&parent_path); + inode_unlock(d_inode(parent_path->dentry)); + path_put(parent_path); putname(filename); return -ENOENT; } @@ -1198,14 +1205,14 @@ static int ksmbd_vfs_lookup_in_dir(const struct path *dir, char *name, * Return: 0 on success, otherwise error */ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, - unsigned int flags, struct path *path, - bool caseless) + unsigned int flags, struct path *parent_path, + struct path *path, bool caseless) { struct ksmbd_share_config *share_conf = work->tcon->share_conf; int err; - struct path parent_path; - err = ksmbd_vfs_path_lookup_locked(share_conf, name, flags, path); + err = ksmbd_vfs_path_lookup_locked(share_conf, name, flags, parent_path, + path); if (!err) return err; @@ -1220,10 +1227,10 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, path_len = strlen(filepath); remain_len = path_len; - parent_path = share_conf->vfs_path; - path_get(&parent_path); + *parent_path = share_conf->vfs_path; + path_get(parent_path); - while (d_can_lookup(parent_path.dentry)) { + while (d_can_lookup(parent_path->dentry)) { char *filename = filepath + path_len - remain_len; char *next = strchrnul(filename, '/'); size_t filename_len = next - filename; @@ -1232,7 +1239,7 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, if (filename_len == 0) break; - err = ksmbd_vfs_lookup_in_dir(&parent_path, filename, + err = ksmbd_vfs_lookup_in_dir(parent_path, filename, filename_len, work->conn->um); if (err) @@ -1249,8 +1256,8 @@ int 
ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, goto out2; else if (is_last) goto out1; - path_put(&parent_path); - parent_path = *path; + path_put(parent_path); + *parent_path = *path; next[0] = '/'; remain_len -= filename_len + 1; @@ -1258,16 +1265,17 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, err = -EINVAL; out2: - path_put(&parent_path); + path_put(parent_path); out1: kfree(filepath); } if (!err) { - err = ksmbd_vfs_lock_parent(parent_path.dentry, path->dentry); - if (err) - dput(path->dentry); - path_put(&parent_path); + err = ksmbd_vfs_lock_parent(parent_path->dentry, path->dentry); + if (err) { + path_put(path); + path_put(parent_path); + } } return err; } diff --git a/fs/smb/server/vfs.h b/fs/smb/server/vfs.h index 8c0931d4d5310..9df4a2a6776b2 100644 --- a/fs/smb/server/vfs.h +++ b/fs/smb/server/vfs.h @@ -115,8 +115,8 @@ int ksmbd_vfs_xattr_stream_name(char *stream_name, char **xattr_stream_name, int ksmbd_vfs_remove_xattr(struct mnt_idmap *idmap, const struct path *path, char *attr_name); int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, - unsigned int flags, struct path *path, - bool caseless); + unsigned int flags, struct path *parent_path, + struct path *path, bool caseless); struct dentry *ksmbd_vfs_kern_path_create(struct ksmbd_work *work, const char *name, unsigned int flags, diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h index d54b595a0fe0f..0d678e9a7b248 100644 --- a/include/linux/dma-fence.h +++ b/include/linux/dma-fence.h @@ -606,7 +606,7 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr) void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline); struct dma_fence *dma_fence_get_stub(void); -struct dma_fence *dma_fence_allocate_private_stub(void); +struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp); u64 dma_fence_context_alloc(unsigned num); extern const struct dma_fence_ops dma_fence_array_ops; diff --git a/include/linux/mm.h b/include/linux/mm.h index 9e10485f37e7f..d1fd7c544dcd8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -641,8 +641,14 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {} */ static inline bool vma_start_read(struct vm_area_struct *vma) { - /* Check before locking. A race might cause false locked result. */ - if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq)) + /* + * Check before locking. A race might cause false locked result. + * We can use READ_ONCE() for the mm_lock_seq here, and don't need + * ACQUIRE semantics, because this is just a lockless check whose result + * we don't rely on for anything - the mm_lock_seq read against which we + * need ordering is below. + */ + if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq)) return false; if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0)) @@ -653,8 +659,13 @@ static inline bool vma_start_read(struct vm_area_struct *vma) * False unlocked result is impossible because we modify and check * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq * modification invalidates all existing locks. + * + * We must use ACQUIRE semantics for the mm_lock_seq so that if we are + * racing with vma_end_write_all(), we only start reading from the VMA + * after it has been unlocked. + * This pairs with RELEASE semantics in vma_end_write_all(). 
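The mm.h changes above turn the second mm_lock_seq check in vma_start_read() into an acquire load so it pairs with the release store in vma_end_write_all(): a reader that sees the post-unlock sequence number is guaranteed to also see the writer's preceding stores into the VMA. A userspace analogue with C11 atomics (illustrative structure, not the kernel API; the per-VMA rwsem is elided):

#include <stdatomic.h>
#include <stdbool.h>

struct mm  { _Atomic int mm_lock_seq; };
struct vma { int vm_lock_seq; struct mm *mm; /* plus a per-VMA rwsem */ };

static bool vma_start_read(struct vma *v)
{
    /* Cheap, unordered early-out: a stale "locked" answer is harmless. */
    if (v->vm_lock_seq == atomic_load_explicit(&v->mm->mm_lock_seq,
                                               memory_order_relaxed))
        return false;

    /* ... down_read_trylock(per-VMA lock) would go here ... */

    /* Pairs with the release increment in vma_end_write_all(). */
    if (v->vm_lock_seq == atomic_load_explicit(&v->mm->mm_lock_seq,
                                               memory_order_acquire)) {
        /* ... up_read(...) ... */
        return false;
    }
    return true;
}

static void vma_end_write_all(struct mm *mm)
{
    atomic_fetch_add_explicit(&mm->mm_lock_seq, 1, memory_order_release);
}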
*/ - if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) { + if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) { up_read(&vma->vm_lock->lock); return false; } @@ -676,7 +687,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq) * current task is holding mmap_write_lock, both vma->vm_lock_seq and * mm->mm_lock_seq can't be concurrently modified. */ - *mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq); + *mm_lock_seq = vma->vm_mm->mm_lock_seq; return (vma->vm_lock_seq == *mm_lock_seq); } @@ -688,7 +699,13 @@ static inline void vma_start_write(struct vm_area_struct *vma) return; down_write(&vma->vm_lock->lock); - vma->vm_lock_seq = mm_lock_seq; + /* + * We should use WRITE_ONCE() here because we can have concurrent reads + * from the early lockless pessimistic check in vma_start_read(). + * We don't really care about the correctness of that early check, but + * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy. + */ + WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq); up_write(&vma->vm_lock->lock); } @@ -702,7 +719,7 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma) if (!down_write_trylock(&vma->vm_lock->lock)) return false; - vma->vm_lock_seq = mm_lock_seq; + WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq); up_write(&vma->vm_lock->lock); return true; } diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index de10fc797c8e9..5e74ce4a28cd6 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -514,6 +514,20 @@ struct vm_area_struct { }; #ifdef CONFIG_PER_VMA_LOCK + /* + * Can only be written (using WRITE_ONCE()) while holding both: + * - mmap_lock (in write mode) + * - vm_lock->lock (in write mode) + * Can be read reliably while holding one of: + * - mmap_lock (in read or write mode) + * - vm_lock->lock (in read or write mode) + * Can be read unreliably (using READ_ONCE()) for pessimistic bailout + * while holding nothing (except RCU to keep the VMA struct allocated). + * + * This sequence counter is explicitly allowed to overflow; sequence + * counter reuse can only lead to occasional unnecessary use of the + * slowpath. + */ int vm_lock_seq; struct vma_lock *vm_lock; @@ -679,6 +693,20 @@ struct mm_struct { * by mmlist_lock */ #ifdef CONFIG_PER_VMA_LOCK + /* + * This field has lock-like semantics, meaning it is sometimes + * accessed with ACQUIRE/RELEASE semantics. + * Roughly speaking, incrementing the sequence number is + * equivalent to releasing locks on VMAs; reading the sequence + * number can be part of taking a read lock on a VMA. + * + * Can be modified under write mmap_lock using RELEASE + * semantics. + * Can be read with no other protection when holding write + * mmap_lock. + * Can be read with ACQUIRE semantics if not holding write + * mmap_lock. + */ int mm_lock_seq; #endif diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h index aab8f1b28d262..e05e167dbd166 100644 --- a/include/linux/mmap_lock.h +++ b/include/linux/mmap_lock.h @@ -76,8 +76,14 @@ static inline void mmap_assert_write_locked(struct mm_struct *mm) static inline void vma_end_write_all(struct mm_struct *mm) { mmap_assert_write_locked(mm); - /* No races during update due to exclusive mmap_lock being held */ - WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq + 1); + /* + * Nobody can concurrently modify mm->mm_lock_seq due to exclusive + * mmap_lock being held. 
+ * We need RELEASE semantics here to ensure that preceding stores into + * the VMA take effect before we unlock it with this store. + * Pairs with ACQUIRE semantics in vma_start_read(). + */ + smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1); } #else static inline void vma_end_write_all(struct mm_struct *mm) {} diff --git a/include/net/ipv6.h b/include/net/ipv6.h index 7332296eca44b..2acc4c808d45d 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -752,12 +752,8 @@ static inline u32 ipv6_addr_hash(const struct in6_addr *a) /* more secured version of ipv6_addr_hash() */ static inline u32 __ipv6_addr_jhash(const struct in6_addr *a, const u32 initval) { - u32 v = (__force u32)a->s6_addr32[0] ^ (__force u32)a->s6_addr32[1]; - - return jhash_3words(v, - (__force u32)a->s6_addr32[2], - (__force u32)a->s6_addr32[3], - initval); + return jhash2((__force const u32 *)a->s6_addr32, + ARRAY_SIZE(a->s6_addr32), initval); } static inline bool ipv6_addr_loopback(const struct in6_addr *a) diff --git a/include/net/vxlan.h b/include/net/vxlan.h index 20bd7d893e10a..b57567296bc67 100644 --- a/include/net/vxlan.h +++ b/include/net/vxlan.h @@ -384,10 +384,15 @@ static inline netdev_features_t vxlan_features_check(struct sk_buff *skb, return features; } -/* IP header + UDP + VXLAN + Ethernet header */ -#define VXLAN_HEADROOM (20 + 8 + 8 + 14) -/* IPv6 header + UDP + VXLAN + Ethernet header */ -#define VXLAN6_HEADROOM (40 + 8 + 8 + 14) +static inline int vxlan_headroom(u32 flags) +{ + /* VXLAN: IP4/6 header + UDP + VXLAN + Ethernet header */ + /* VXLAN-GPE: IP4/6 header + UDP + VXLAN */ + return (flags & VXLAN_F_IPV6 ? sizeof(struct ipv6hdr) : + sizeof(struct iphdr)) + + sizeof(struct udphdr) + sizeof(struct vxlanhdr) + + (flags & VXLAN_F_GPE ? 0 : ETH_HLEN); +} static inline struct vxlanhdr *vxlan_hdr(struct sk_buff *skb) { diff --git a/include/uapi/linux/blkzoned.h b/include/uapi/linux/blkzoned.h index b80fcc9ea5257..f85743ef6e7d1 100644 --- a/include/uapi/linux/blkzoned.h +++ b/include/uapi/linux/blkzoned.h @@ -51,13 +51,13 @@ enum blk_zone_type { * * The Zone Condition state machine in the ZBC/ZAC standards maps the above * deinitions as: - * - ZC1: Empty | BLK_ZONE_EMPTY + * - ZC1: Empty | BLK_ZONE_COND_EMPTY * - ZC2: Implicit Open | BLK_ZONE_COND_IMP_OPEN * - ZC3: Explicit Open | BLK_ZONE_COND_EXP_OPEN - * - ZC4: Closed | BLK_ZONE_CLOSED - * - ZC5: Full | BLK_ZONE_FULL - * - ZC6: Read Only | BLK_ZONE_READONLY - * - ZC7: Offline | BLK_ZONE_OFFLINE + * - ZC4: Closed | BLK_ZONE_COND_CLOSED + * - ZC5: Full | BLK_ZONE_COND_FULL + * - ZC6: Read Only | BLK_ZONE_COND_READONLY + * - ZC7: Offline | BLK_ZONE_COND_OFFLINE * * Conditions 0x5 to 0xC are reserved by the current ZBC/ZAC spec and should * be considered invalid. 
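The vxlan.h hunk above replaces the fixed VXLAN_HEADROOM/VXLAN6_HEADROOM constants with a helper computed from the tunnel flags, because VXLAN-GPE frames carry no inner Ethernet header and were over-reserving by 14 bytes. The arithmetic, spelled out as a standalone program (header sizes per the standard wire formats):

#include <stdio.h>

enum { IP4_H = 20, IP6_H = 40, UDP_H = 8, VXLAN_H = 8, ETH_H = 14 };

static int vxlan_headroom(int ipv6, int gpe)
{
    return (ipv6 ? IP6_H : IP4_H) + UDP_H + VXLAN_H + (gpe ? 0 : ETH_H);
}

int main(void)
{
    printf("v4 classic: %d\n", vxlan_headroom(0, 0)); /* 50, old VXLAN_HEADROOM */
    printf("v6 classic: %d\n", vxlan_headroom(1, 0)); /* 70, old VXLAN6_HEADROOM */
    printf("v4 GPE:     %d\n", vxlan_headroom(0, 1)); /* 36: no inner Ethernet */
    return 0;
}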
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index d6667b435dd39..2989b81cca82a 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -2579,11 +2579,20 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx) return 0; } +static bool current_pending_io(void) +{ + struct io_uring_task *tctx = current->io_uring; + + if (!tctx) + return false; + return percpu_counter_read_positive(&tctx->inflight); +} + /* when returns >0, the caller should retry */ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx, struct io_wait_queue *iowq) { - int token, ret; + int io_wait, ret; if (unlikely(READ_ONCE(ctx->check_cq))) return 1; @@ -2597,17 +2606,19 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx, return 0; /* - * Use io_schedule_prepare/finish, so cpufreq can take into account - * that the task is waiting for IO - turns out to be important for low - * QD IO. + * Mark us as being in io_wait if we have pending requests, so cpufreq + * can take into account that the task is waiting for IO - turns out + * to be important for low QD IO. */ - token = io_schedule_prepare(); + io_wait = current->in_iowait; + if (current_pending_io()) + current->in_iowait = 1; ret = 0; if (iowq->timeout == KTIME_MAX) schedule(); else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS)) ret = -ETIME; - io_schedule_finish(token); + current->in_iowait = io_wait; return ret; } @@ -3859,7 +3870,7 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p, ctx->syscall_iopoll = 1; ctx->compat = in_compat_syscall(); - if (!capable(CAP_IPC_LOCK)) + if (!ns_capable_noaudit(&init_user_ns, CAP_IPC_LOCK)) ctx->user = get_uid(current_user()); /* diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 728f434de2bbf..21db0df0eb000 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -333,21 +333,43 @@ static __always_inline int __waiter_prio(struct task_struct *task) return prio; } +/* + * Update the waiter->tree copy of the sort keys. + */ static __always_inline void waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task) { - waiter->prio = __waiter_prio(task); - waiter->deadline = task->dl.deadline; + lockdep_assert_held(&waiter->lock->wait_lock); + lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry)); + + waiter->tree.prio = __waiter_prio(task); + waiter->tree.deadline = task->dl.deadline; +} + +/* + * Update the waiter->pi_tree copy of the sort keys (from the tree copy). 
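The io_uring hunk above stops using io_schedule_prepare()/io_schedule_finish() and instead sets current->in_iowait only when the task actually has inflight requests, so cpufreq still sees iowait pressure for low-queue-depth IO without counting idle CQ waiters. A simplified sketch of the save/set/restore shape (userspace analogue; names illustrative):

#include <stdbool.h>

struct task { bool in_iowait; long inflight; };

static bool pending_io(const struct task *t)
{
    return t->inflight > 0;     /* kernel: percpu_counter_read_positive() */
}

static void wait_for_completions(struct task *t, void (*sched)(void))
{
    bool saved = t->in_iowait;  /* preserve whatever state we were in */

    if (pending_io(t))
        t->in_iowait = true;    /* tell the scheduler this is IO wait */

    sched();                    /* kernel: schedule() / schedule_hrtimeout() */

    t->in_iowait = saved;       /* restore on every exit path */
}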
+ */ +static __always_inline void +waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task) +{ + lockdep_assert_held(&waiter->lock->wait_lock); + lockdep_assert_held(&task->pi_lock); + lockdep_assert(RB_EMPTY_NODE(&waiter->pi_tree.entry)); + + waiter->pi_tree.prio = waiter->tree.prio; + waiter->pi_tree.deadline = waiter->tree.deadline; } /* - * Only use with rt_mutex_waiter_{less,equal}() + * Only use with rt_waiter_node_{less,equal}() */ +#define task_to_waiter_node(p) \ + &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline } #define task_to_waiter(p) \ - &(struct rt_mutex_waiter){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline } + &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) } -static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left, - struct rt_mutex_waiter *right) +static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left, + struct rt_waiter_node *right) { if (left->prio < right->prio) return 1; @@ -364,8 +386,8 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left, return 0; } -static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left, - struct rt_mutex_waiter *right) +static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left, + struct rt_waiter_node *right) { if (left->prio != right->prio) return 0; @@ -385,7 +407,7 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left, static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter, struct rt_mutex_waiter *top_waiter) { - if (rt_mutex_waiter_less(waiter, top_waiter)) + if (rt_waiter_node_less(&waiter->tree, &top_waiter->tree)) return true; #ifdef RT_MUTEX_BUILD_SPINLOCKS @@ -393,30 +415,30 @@ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter, * Note that RT tasks are excluded from same priority (lateral) * steals to prevent the introduction of an unbounded latency. 
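The core of this rtmutex fix is structural: a waiter sits in two rbtrees — lock->waiters under wait_lock, and the owner's pi_waiters under pi_lock — so the single prio/deadline pair is split into two rt_waiter_node copies, each only ever written under its own tree's lock, with waiter_clone_prio() refreshing the pi_tree copy from the tree copy when both locks are held. A structural sketch (field layout mirrors the diff; rb_node is stubbed):

#include <stdint.h>

struct rb_node { struct rb_node *left, *right, *parent; };

/* One copy of the sort keys per tree, so each tree's keys stay
 * stable under that tree's own lock. */
struct rt_waiter_node {
    struct rb_node entry;
    int      prio;
    uint64_t deadline;
};

struct rt_mutex_waiter {
    struct rt_waiter_node tree;     /* lock->waiters, under wait_lock */
    struct rt_waiter_node pi_tree;  /* owner->pi_waiters, under pi_lock */
    /* ... task, lock, ww_ctx ... */
};

/* waiter_clone_prio(): callers hold wait_lock and the owner's pi_lock. */
static void clone_prio(struct rt_mutex_waiter *w)
{
    w->pi_tree.prio     = w->tree.prio;
    w->pi_tree.deadline = w->tree.deadline;
}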
*/ - if (rt_prio(waiter->prio) || dl_prio(waiter->prio)) + if (rt_prio(waiter->tree.prio) || dl_prio(waiter->tree.prio)) return false; - return rt_mutex_waiter_equal(waiter, top_waiter); + return rt_waiter_node_equal(&waiter->tree, &top_waiter->tree); #else return false; #endif } #define __node_2_waiter(node) \ - rb_entry((node), struct rt_mutex_waiter, tree_entry) + rb_entry((node), struct rt_mutex_waiter, tree.entry) static __always_inline bool __waiter_less(struct rb_node *a, const struct rb_node *b) { struct rt_mutex_waiter *aw = __node_2_waiter(a); struct rt_mutex_waiter *bw = __node_2_waiter(b); - if (rt_mutex_waiter_less(aw, bw)) + if (rt_waiter_node_less(&aw->tree, &bw->tree)) return 1; if (!build_ww_mutex()) return 0; - if (rt_mutex_waiter_less(bw, aw)) + if (rt_waiter_node_less(&bw->tree, &aw->tree)) return 0; /* NOTE: relies on waiter->ww_ctx being set before insertion */ @@ -434,48 +456,58 @@ static __always_inline bool __waiter_less(struct rb_node *a, const struct rb_nod static __always_inline void rt_mutex_enqueue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter) { - rb_add_cached(&waiter->tree_entry, &lock->waiters, __waiter_less); + lockdep_assert_held(&lock->wait_lock); + + rb_add_cached(&waiter->tree.entry, &lock->waiters, __waiter_less); } static __always_inline void rt_mutex_dequeue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter) { - if (RB_EMPTY_NODE(&waiter->tree_entry)) + lockdep_assert_held(&lock->wait_lock); + + if (RB_EMPTY_NODE(&waiter->tree.entry)) return; - rb_erase_cached(&waiter->tree_entry, &lock->waiters); - RB_CLEAR_NODE(&waiter->tree_entry); + rb_erase_cached(&waiter->tree.entry, &lock->waiters); + RB_CLEAR_NODE(&waiter->tree.entry); } -#define __node_2_pi_waiter(node) \ - rb_entry((node), struct rt_mutex_waiter, pi_tree_entry) +#define __node_2_rt_node(node) \ + rb_entry((node), struct rt_waiter_node, entry) -static __always_inline bool -__pi_waiter_less(struct rb_node *a, const struct rb_node *b) +static __always_inline bool __pi_waiter_less(struct rb_node *a, const struct rb_node *b) { - return rt_mutex_waiter_less(__node_2_pi_waiter(a), __node_2_pi_waiter(b)); + return rt_waiter_node_less(__node_2_rt_node(a), __node_2_rt_node(b)); } static __always_inline void rt_mutex_enqueue_pi(struct task_struct *task, struct rt_mutex_waiter *waiter) { - rb_add_cached(&waiter->pi_tree_entry, &task->pi_waiters, __pi_waiter_less); + lockdep_assert_held(&task->pi_lock); + + rb_add_cached(&waiter->pi_tree.entry, &task->pi_waiters, __pi_waiter_less); } static __always_inline void rt_mutex_dequeue_pi(struct task_struct *task, struct rt_mutex_waiter *waiter) { - if (RB_EMPTY_NODE(&waiter->pi_tree_entry)) + lockdep_assert_held(&task->pi_lock); + + if (RB_EMPTY_NODE(&waiter->pi_tree.entry)) return; - rb_erase_cached(&waiter->pi_tree_entry, &task->pi_waiters); - RB_CLEAR_NODE(&waiter->pi_tree_entry); + rb_erase_cached(&waiter->pi_tree.entry, &task->pi_waiters); + RB_CLEAR_NODE(&waiter->pi_tree.entry); } -static __always_inline void rt_mutex_adjust_prio(struct task_struct *p) +static __always_inline void rt_mutex_adjust_prio(struct rt_mutex_base *lock, + struct task_struct *p) { struct task_struct *pi_task = NULL; + lockdep_assert_held(&lock->wait_lock); + lockdep_assert(rt_mutex_owner(lock) == p); lockdep_assert_held(&p->pi_lock); if (task_has_pi_waiters(p)) @@ -571,9 +603,14 @@ static __always_inline struct rt_mutex_base *task_blocked_on_lock(struct task_st * Chain walk basics and protection scope * * [R] refcount on task - * [P] task->pi_lock held + * 
[Pn] task->pi_lock held * [L] rtmutex->wait_lock held * + * Normal locking order: + * + * rtmutex->wait_lock + * task->pi_lock + * * Step Description Protected by * function arguments: * @task [R] @@ -588,27 +625,32 @@ static __always_inline struct rt_mutex_base *task_blocked_on_lock(struct task_st * again: * loop_sanity_check(); * retry: - * [1] lock(task->pi_lock); [R] acquire [P] - * [2] waiter = task->pi_blocked_on; [P] - * [3] check_exit_conditions_1(); [P] - * [4] lock = waiter->lock; [P] - * [5] if (!try_lock(lock->wait_lock)) { [P] try to acquire [L] - * unlock(task->pi_lock); release [P] + * [1] lock(task->pi_lock); [R] acquire [P1] + * [2] waiter = task->pi_blocked_on; [P1] + * [3] check_exit_conditions_1(); [P1] + * [4] lock = waiter->lock; [P1] + * [5] if (!try_lock(lock->wait_lock)) { [P1] try to acquire [L] + * unlock(task->pi_lock); release [P1] * goto retry; * } - * [6] check_exit_conditions_2(); [P] + [L] - * [7] requeue_lock_waiter(lock, waiter); [P] + [L] - * [8] unlock(task->pi_lock); release [P] + * [6] check_exit_conditions_2(); [P1] + [L] + * [7] requeue_lock_waiter(lock, waiter); [P1] + [L] + * [8] unlock(task->pi_lock); release [P1] * put_task_struct(task); release [R] * [9] check_exit_conditions_3(); [L] * [10] task = owner(lock); [L] * get_task_struct(task); [L] acquire [R] - * lock(task->pi_lock); [L] acquire [P] - * [11] requeue_pi_waiter(tsk, waiters(lock));[P] + [L] - * [12] check_exit_conditions_4(); [P] + [L] - * [13] unlock(task->pi_lock); release [P] + * lock(task->pi_lock); [L] acquire [P2] + * [11] requeue_pi_waiter(tsk, waiters(lock));[P2] + [L] + * [12] check_exit_conditions_4(); [P2] + [L] + * [13] unlock(task->pi_lock); release [P2] * unlock(lock->wait_lock); release [L] * goto again; + * + * Where P1 is the blocking task and P2 is the lock owner; going up one step + * the owner becomes the next blocked task etc.. + * +* */ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, enum rtmutex_chainwalk chwalk, @@ -756,7 +798,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, * enabled we continue, but stop the requeueing in the chain * walk. */ - if (rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { + if (rt_waiter_node_equal(&waiter->tree, task_to_waiter_node(task))) { if (!detect_deadlock) goto out_unlock_pi; else @@ -764,13 +806,18 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, } /* - * [4] Get the next lock + * [4] Get the next lock; per holding task->pi_lock we can't unblock + * and guarantee @lock's existence. */ lock = waiter->lock; /* * [5] We need to trylock here as we are holding task->pi_lock, * which is the reverse lock order versus the other rtmutex * operations. + * + * Per the above, holding task->pi_lock guarantees lock exists, so + * inverting this lock order is infeasible from a life-time + * perspective. */ if (!raw_spin_trylock(&lock->wait_lock)) { raw_spin_unlock_irq(&task->pi_lock); @@ -874,17 +921,18 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, * or * * DL CBS enforcement advancing the effective deadline. - * - * Even though pi_waiters also uses these fields, and that tree is only - * updated in [11], we can do this here, since we hold [L], which - * serializes all pi_waiters access and rb_erase() does not care about - * the values of the node being removed. 
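Note how the enqueue/dequeue helpers above gain lockdep_assert_held() annotations that encode the locking rules in code rather than comments. A userspace analogue of the idea, using a small owner-tracking lock in place of lockdep (illustrative only):

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* A tiny owner-tracking lock so callees can assert "caller holds this",
 * standing in for the kernel's lockdep_assert_held(). */
struct dbg_lock {
    pthread_mutex_t m;
    pthread_t owner;
    bool held;
};

static void dbg_lock_acquire(struct dbg_lock *l)
{
    pthread_mutex_lock(&l->m);
    l->owner = pthread_self();
    l->held = true;
}

static void dbg_lock_release(struct dbg_lock *l)
{
    l->held = false;
    pthread_mutex_unlock(&l->m);
}

static void assert_held(struct dbg_lock *l)
{
    assert(l->held && pthread_equal(l->owner, pthread_self()));
}

/* Mirrors rt_mutex_enqueue(): the rule "wait_lock guards lock->waiters"
 * is now checked in testing instead of failing silently in production. */
static void enqueue(struct dbg_lock *wait_lock /* , waiter, tree */)
{
    assert_held(wait_lock);
    /* ... rb_add_cached(...) ... */
}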
static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, enum rtmutex_chainwalk chwalk, @@ -756,7 +798,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, * enabled we continue, but stop the requeueing in the chain * walk. */ - if (rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { + if (rt_waiter_node_equal(&waiter->tree, task_to_waiter_node(task))) { if (!detect_deadlock) goto out_unlock_pi; else @@ -764,13 +806,18 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, } /* - * [4] Get the next lock + * [4] Get the next lock; since we hold task->pi_lock, the task cannot + * unblock, which guarantees @lock's existence. */ lock = waiter->lock; /* * [5] We need to trylock here as we are holding task->pi_lock, * which is the reverse lock order versus the other rtmutex * operations. + * + * Per the above, holding task->pi_lock guarantees lock exists, so + * inverting this lock order is infeasible from a lifetime + * perspective. */ if (!raw_spin_trylock(&lock->wait_lock)) { raw_spin_unlock_irq(&task->pi_lock); @@ -874,17 +921,18 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, * or * * DL CBS enforcement advancing the effective deadline. - * - * Even though pi_waiters also uses these fields, and that tree is only - * updated in [11], we can do this here, since we hold [L], which - * serializes all pi_waiters access and rb_erase() does not care about - * the values of the node being removed. */ waiter_update_prio(waiter, task); rt_mutex_enqueue(lock, waiter); - /* [8] Release the task */ + /* + * [8] Release the (blocking) task in preparation for + * taking the owner task in [10]. + * + * Since we hold lock->wait_lock, task cannot unblock, even if we + * release task->pi_lock. + */ raw_spin_unlock(&task->pi_lock); put_task_struct(task); @@ -908,7 +956,12 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, return 0; } - /* [10] Grab the next task, i.e. the owner of @lock */ + /* + * [10] Grab the next task, i.e. the owner of @lock + * + * Per holding lock->wait_lock and checking for !owner above, there + * must be an owner and it cannot go away. + */ task = get_task_struct(rt_mutex_owner(lock)); raw_spin_lock(&task->pi_lock); @@ -921,8 +974,9 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task, * and adjust the priority of the owner. */ rt_mutex_dequeue_pi(task, prerequeue_top_waiter); + waiter_clone_prio(waiter, task); rt_mutex_enqueue_pi(task, waiter); - rt_mutex_adjust_prio(task); + rt_mutex_adjust_prio(lock, task); } else if (prerequeue_top_waiter == waiter) { /* @@ -937,8 +991,9 @@ */ rt_mutex_dequeue_pi(task, waiter); waiter = rt_mutex_top_waiter(lock); + waiter_clone_prio(waiter, task); rt_mutex_enqueue_pi(task, waiter); - rt_mutex_adjust_prio(task); + rt_mutex_adjust_prio(lock, task); } else { /* * Nothing changed. No need to do any priority @@ -1154,6 +1209,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock, waiter->task = task; waiter->lock = lock; waiter_update_prio(waiter, task); + waiter_clone_prio(waiter, task); /* Get the top priority waiter on the lock */ if (rt_mutex_has_waiters(lock)) @@ -1187,7 +1243,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock, rt_mutex_dequeue_pi(owner, top_waiter); rt_mutex_enqueue_pi(owner, waiter); - rt_mutex_adjust_prio(owner); + rt_mutex_adjust_prio(lock, owner); if (owner->pi_blocked_on) chain_walk = 1; } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) { @@ -1234,6 +1290,8 @@ static void __sched mark_wakeup_next_waiter(struct rt_wake_q_head *wqh, { struct rt_mutex_waiter *waiter; + lockdep_assert_held(&lock->wait_lock); + raw_spin_lock(&current->pi_lock); waiter = rt_mutex_top_waiter(lock); @@ -1246,7 +1304,7 @@ static void __sched mark_wakeup_next_waiter(struct rt_wake_q_head *wqh, * task unblocks.
*/ rt_mutex_dequeue_pi(current, waiter); - rt_mutex_adjust_prio(current); + rt_mutex_adjust_prio(lock, current); /* * As we are waking up the top waiter, and the waiter stays @@ -1482,7 +1540,7 @@ static void __sched remove_waiter(struct rt_mutex_base *lock, if (rt_mutex_has_waiters(lock)) rt_mutex_enqueue_pi(owner, rt_mutex_top_waiter(lock)); - rt_mutex_adjust_prio(owner); + rt_mutex_adjust_prio(lock, owner); /* Store the lock on which owner is blocked or NULL */ next_lock = task_blocked_on_lock(owner); diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c index cb9fdff76a8a3..a6974d0445930 100644 --- a/kernel/locking/rtmutex_api.c +++ b/kernel/locking/rtmutex_api.c @@ -459,7 +459,7 @@ void __sched rt_mutex_adjust_pi(struct task_struct *task) raw_spin_lock_irqsave(&task->pi_lock, flags); waiter = task->pi_blocked_on; - if (!waiter || rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { + if (!waiter || rt_waiter_node_equal(&waiter->tree, task_to_waiter_node(task))) { raw_spin_unlock_irqrestore(&task->pi_lock, flags); return; } diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h index c47e8361bfb5c..1162e07cdaea1 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -17,27 +17,44 @@ #include <linux/rtmutex.h> #include <linux/sched/wake_q.h> + +/* + * This is a helper for the struct rt_mutex_waiter below. A waiter goes in two + * separate trees and they need their own copy of the sort keys because of + * different locking requirements. + * + * @entry: rbtree node to enqueue into the waiters tree + * @prio: Priority of the waiter + * @deadline: Deadline of the waiter if applicable + * + * See rt_waiter_node_less() and waiter_*_prio(). + */ +struct rt_waiter_node { + struct rb_node entry; + int prio; + u64 deadline; +}; + /* * This is the control structure for tasks blocked on a rt_mutex, * which is allocated on the kernel stack of the blocked task. * - * @tree_entry: pi node to enqueue into the mutex waiters tree - * @pi_tree_entry: pi node to enqueue into the mutex owner waiters tree + * @tree: node to enqueue into the mutex waiters tree + * @pi_tree: node to enqueue into the mutex owner waiters tree * @task: task reference to the blocked task * @lock: Pointer to the rt_mutex on which the waiter blocks * @wake_state: Wakeup state to use (TASK_NORMAL or TASK_RTLOCK_WAIT) - * @prio: Priority of the waiter - * @deadline: Deadline of the waiter if applicable * @ww_ctx: WW context pointer + * + * @tree is ordered by @lock->wait_lock + * @pi_tree is ordered by rt_mutex_owner(@lock)->pi_lock */ struct rt_mutex_waiter { - struct rb_node tree_entry; - struct rb_node pi_tree_entry; + struct rt_waiter_node tree; + struct rt_waiter_node pi_tree; struct task_struct *task; struct rt_mutex_base *lock; unsigned int wake_state; - int prio; - u64 deadline; struct ww_acquire_ctx *ww_ctx; };
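Because the two tree memberships are serialized by different locks (@lock->wait_lock for @tree, the owner's ->pi_lock for @pi_tree), each needs its own copy of the sort keys; the patch's new waiter_clone_prio() refreshes the pi_tree copy only at points where both locks are held. A toy, compilable sketch of that idea with simplified stand-in types (not the kernel's structures):

#include <stdio.h>

/* Toy analogue of struct rt_waiter_node: each tree membership owns a
 * copy of the sort keys so it can be read under that tree's lock. */
struct waiter_node {
	int prio;
	unsigned long deadline;
};

struct waiter {
	struct waiter_node tree;	/* ordered under lock->wait_lock */
	struct waiter_node pi_tree;	/* ordered under owner->pi_lock */
};

/* Analogue of waiter_clone_prio(): copy the keys into the pi_tree node
 * at a point where both locks are held, keeping both trees coherent. */
static void clone_prio(struct waiter *w)
{
	w->pi_tree.prio = w->tree.prio;
	w->pi_tree.deadline = w->tree.deadline;
}

int main(void)
{
	struct waiter w = { .tree = { .prio = 10, .deadline = 100 } };

	clone_prio(&w);
	printf("pi_tree copy: prio=%d deadline=%lu\n",
	       w.pi_tree.prio, w.pi_tree.deadline);
	return 0;
}

The split means an update done under only lock->wait_lock can no longer silently change the keys that the owner's pi_waiters tree is sorted on.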
@@ -105,7 +122,7 @@ static inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock, { struct rb_node *leftmost = rb_first_cached(&lock->waiters); - return rb_entry(leftmost, struct rt_mutex_waiter, tree_entry) == waiter; + return rb_entry(leftmost, struct rt_mutex_waiter, tree.entry) == waiter; } static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock) @@ -113,8 +130,10 @@ static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base * struct rb_node *leftmost = rb_first_cached(&lock->waiters); struct rt_mutex_waiter *w = NULL; + lockdep_assert_held(&lock->wait_lock); + if (leftmost) { - w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry); + w = rb_entry(leftmost, struct rt_mutex_waiter, tree.entry); BUG_ON(w->lock != lock); } return w; @@ -127,8 +146,10 @@ static inline int task_has_pi_waiters(struct task_struct *p) static inline struct rt_mutex_waiter *task_top_pi_waiter(struct task_struct *p) { + lockdep_assert_held(&p->pi_lock); + return rb_entry(p->pi_waiters.rb_leftmost, struct rt_mutex_waiter, - pi_tree_entry); + pi_tree.entry); } #define RT_MUTEX_HAS_WAITERS 1UL @@ -190,8 +211,8 @@ static inline void debug_rt_mutex_free_waiter(struct rt_mutex_waiter *waiter) static inline void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter) { debug_rt_mutex_init_waiter(waiter); - RB_CLEAR_NODE(&waiter->pi_tree_entry); - RB_CLEAR_NODE(&waiter->tree_entry); + RB_CLEAR_NODE(&waiter->pi_tree.entry); + RB_CLEAR_NODE(&waiter->tree.entry); waiter->wake_state = TASK_NORMAL; waiter->task = NULL; } diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h index 56f139201f246..3ad2cc4823e59 100644 --- a/kernel/locking/ww_mutex.h +++ b/kernel/locking/ww_mutex.h @@ -96,25 +96,25 @@ __ww_waiter_first(struct rt_mutex *lock) struct rb_node *n = rb_first(&lock->rtmutex.waiters.rb_root); if (!n) return NULL; - return rb_entry(n, struct rt_mutex_waiter, tree_entry); + return rb_entry(n, struct rt_mutex_waiter, tree.entry); } static inline struct rt_mutex_waiter * __ww_waiter_next(struct rt_mutex *lock, struct rt_mutex_waiter *w) { - struct rb_node *n = rb_next(&w->tree_entry); + struct rb_node *n = rb_next(&w->tree.entry); if (!n) return NULL; - return rb_entry(n, struct rt_mutex_waiter, tree_entry); + return rb_entry(n, struct rt_mutex_waiter, tree.entry); } static inline struct rt_mutex_waiter * __ww_waiter_prev(struct rt_mutex *lock, struct rt_mutex_waiter *w) { - struct rb_node *n = rb_prev(&w->tree_entry); + struct rb_node *n = rb_prev(&w->tree.entry); if (!n) return NULL; - return rb_entry(n, struct rt_mutex_waiter, tree_entry); + return rb_entry(n, struct rt_mutex_waiter, tree.entry); }
static inline struct rt_mutex_waiter * @@ -123,7 +123,7 @@ __ww_waiter_last(struct rt_mutex *lock) { struct rb_node *n = rb_last(&lock->rtmutex.waiters.rb_root); if (!n) return NULL; - return rb_entry(n, struct rt_mutex_waiter, tree_entry); + return rb_entry(n, struct rt_mutex_waiter, tree.entry); } static inline void diff --git a/kernel/signal.c b/kernel/signal.c index 2547fa73bde51..1b39cba7dfd38 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -561,6 +561,10 @@ bool unhandled_signal(struct task_struct *tsk, int sig) if (handler != SIG_IGN && handler != SIG_DFL) return false; + /* If dying, we handle all new signals by ignoring them */ + if (fatal_signal_pending(tsk)) + return false; + /* if ptraced, let the tracer determine */ return !tsk->ptrace; } diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c index 14d8001140c82..99634b29a8b82 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -523,6 +523,8 @@ struct ring_buffer_per_cpu { rb_time_t before_stamp; u64 event_stamp[MAX_NEST]; u64 read_stamp; + /* pages removed since last reset */ + unsigned long pages_removed; /* ring buffer pages to update, > 0 to add, < 0 to remove */ long nr_pages_to_update; struct list_head new_pages; /* new pages to add */ @@ -558,6 +560,7 @@ struct ring_buffer_iter { struct buffer_page *head_page; struct buffer_page *cache_reader_page; unsigned long cache_read; + unsigned long cache_pages_removed; u64 read_stamp; u64 page_stamp; struct ring_buffer_event *event; @@ -1956,6 +1959,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages) to_remove = rb_list_head(to_remove)->next; head_bit |= (unsigned long)to_remove & RB_PAGE_HEAD; } + /* Read iterators need to reset themselves when some pages are removed */ + cpu_buffer->pages_removed += nr_removed; next_page = rb_list_head(to_remove)->next; @@ -1977,12 +1982,6 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages) cpu_buffer->head_page = list_entry(next_page, struct buffer_page, list); - /* - * change read pointer to make sure any read iterators reset - * themselves - */ - cpu_buffer->read = 0; - /* pages are removed, resume tracing and then free the pages */ atomic_dec(&cpu_buffer->record_disabled); raw_spin_unlock_irq(&cpu_buffer->reader_lock); @@ -4392,6 +4391,7 @@ static void rb_iter_reset(struct ring_buffer_iter *iter) iter->cache_reader_page = iter->head_page; iter->cache_read = cpu_buffer->read; + iter->cache_pages_removed = cpu_buffer->pages_removed; if (iter->head) { iter->read_stamp = cpu_buffer->read_stamp; @@ -4846,12 +4846,13 @@ rb_iter_peek(struct ring_buffer_iter *iter, u64 *ts) buffer = cpu_buffer->buffer; /* - * Check if someone performed a consuming read to - * the buffer. A consuming read invalidates the iterator - * and we need to reset the iterator in this case. + * Check if someone performed a consuming read to the buffer + * or removed some pages from the buffer. In these cases, + * the iterator was invalidated and we need to reset it. */ if (unlikely(iter->cache_read != cpu_buffer->read || - iter->cache_reader_page != cpu_buffer->reader_page) + iter->cache_reader_page != cpu_buffer->reader_page || + iter->cache_pages_removed != cpu_buffer->pages_removed)) rb_iter_reset(iter); again: @@ -5295,6 +5296,7 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) cpu_buffer->last_overrun = 0; rb_head_page_activate(cpu_buffer); + cpu_buffer->pages_removed = 0; } /* Must have disabled the cpu buffer then done a synchronize_rcu */
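The new pages_removed field acts as a generation counter: rb_remove_pages() increments it, the iterator caches the value it last saw, and rb_iter_peek() resets the iterator once the cached value goes stale, replacing the old trick of zeroing cpu_buffer->read. A self-contained sketch of the scheme, with toy types standing in for the ring-buffer structures:

#include <stdio.h>

/* Toy analogues of ring_buffer_per_cpu and ring_buffer_iter. */
struct buf  { unsigned long pages_removed; };
struct iter { unsigned long cache_pages_removed; };

static void iter_reset(struct iter *it, const struct buf *b)
{
	it->cache_pages_removed = b->pages_removed;
}

static int iter_stale(const struct iter *it, const struct buf *b)
{
	return it->cache_pages_removed != b->pages_removed;
}

int main(void)
{
	struct buf b = { 0 };
	struct iter it;

	iter_reset(&it, &b);
	b.pages_removed++;		/* rb_remove_pages() analogue */
	if (iter_stale(&it, &b))
		iter_reset(&it, &b);	/* rb_iter_peek() -> rb_iter_reset() */
	printf("iterator resynced at generation %lu\n", b.pages_removed);
	return 0;
}

A forward-only counter is safer than resetting the read pointer because the pointer can coincidentally return to the cached value, while a bumped generation count cannot be missed.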
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c index 57e539d479890..32f39eabc0716 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c @@ -611,7 +611,6 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file, { struct trace_event_call *call = file->event_call; struct trace_array *tr = file->tr; - unsigned long file_flags = file->flags; int ret = 0; int disable; @@ -635,6 +634,8 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file, break; disable = file->flags & EVENT_FILE_FL_SOFT_DISABLED; clear_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags); + /* Disable use of trace_buffered_event */ + trace_buffered_event_disable(); } else disable = !(file->flags & EVENT_FILE_FL_SOFT_MODE); @@ -673,6 +674,8 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file, if (atomic_inc_return(&file->sm_ref) > 1) break; set_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags); + /* Enable use of trace_buffered_event */ + trace_buffered_event_enable(); } if (!(file->flags & EVENT_FILE_FL_ENABLED)) { @@ -712,15 +715,6 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file, break; } - /* Enable or disable use of trace_buffered_event */ - if ((file_flags & EVENT_FILE_FL_SOFT_DISABLED) != - (file->flags & EVENT_FILE_FL_SOFT_DISABLED)) { - if (file->flags & EVENT_FILE_FL_SOFT_DISABLED) - trace_buffered_event_enable(); - else - trace_buffered_event_disable(); - } - return ret; } diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c index f1db333270e9f..fad668042f3e7 100644 --- a/lib/test_maple_tree.c +++ b/lib/test_maple_tree.c @@ -30,54 +30,54 @@ #else #define cond_resched() do {} while (0) #endif -static -int mtree_insert_index(struct maple_tree *mt, unsigned long index, gfp_t gfp) +static int __init mtree_insert_index(struct maple_tree *mt, + unsigned long index, gfp_t gfp) { return mtree_insert(mt, index, xa_mk_value(index & LONG_MAX), gfp); } -static void mtree_erase_index(struct maple_tree *mt, unsigned long index) +static void __init mtree_erase_index(struct maple_tree *mt, unsigned long index) { MT_BUG_ON(mt, mtree_erase(mt, index) != xa_mk_value(index & LONG_MAX)); MT_BUG_ON(mt, mtree_load(mt, index) != NULL); } -static int mtree_test_insert(struct maple_tree *mt, unsigned long index, +static int __init mtree_test_insert(struct maple_tree *mt, unsigned long index, void *ptr) { return mtree_insert(mt, index, ptr, GFP_KERNEL); } -static int mtree_test_store_range(struct maple_tree *mt, unsigned long start, - unsigned long end, void *ptr) +static int __init mtree_test_store_range(struct maple_tree *mt, + unsigned long start, unsigned long end, void *ptr) { return mtree_store_range(mt, start, end, ptr, GFP_KERNEL); } -static int mtree_test_store(struct maple_tree *mt, unsigned long start, +static int __init mtree_test_store(struct maple_tree *mt, unsigned long start, void *ptr) { return mtree_test_store_range(mt, start, start, ptr); } -static int mtree_test_insert_range(struct maple_tree *mt, unsigned long
start, - unsigned long end, void *ptr) +static int __init mtree_test_insert_range(struct maple_tree *mt, + unsigned long start, unsigned long end, void *ptr) { return mtree_insert_range(mt, start, end, ptr, GFP_KERNEL); } -static void *mtree_test_load(struct maple_tree *mt, unsigned long index) +static void __init *mtree_test_load(struct maple_tree *mt, unsigned long index) { return mtree_load(mt, index); } -static void *mtree_test_erase(struct maple_tree *mt, unsigned long index) +static void __init *mtree_test_erase(struct maple_tree *mt, unsigned long index) { return mtree_erase(mt, index); } #if defined(CONFIG_64BIT) -static noinline void check_mtree_alloc_range(struct maple_tree *mt, +static noinline void __init check_mtree_alloc_range(struct maple_tree *mt, unsigned long start, unsigned long end, unsigned long size, unsigned long expected, int eret, void *ptr) { @@ -94,7 +94,7 @@ static noinline void check_mtree_alloc_range(struct maple_tree *mt, MT_BUG_ON(mt, result != expected); } -static noinline void check_mtree_alloc_rrange(struct maple_tree *mt, +static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt, unsigned long start, unsigned long end, unsigned long size, unsigned long expected, int eret, void *ptr) { @@ -112,8 +112,8 @@ static noinline void check_mtree_alloc_rrange(struct maple_tree *mt, } #endif -static noinline void check_load(struct maple_tree *mt, unsigned long index, - void *ptr) +static noinline void __init check_load(struct maple_tree *mt, + unsigned long index, void *ptr) { void *ret = mtree_test_load(mt, index); @@ -122,7 +122,7 @@ static noinline void check_load(struct maple_tree *mt, unsigned long index, MT_BUG_ON(mt, ret != ptr); } -static noinline void check_store_range(struct maple_tree *mt, +static noinline void __init check_store_range(struct maple_tree *mt, unsigned long start, unsigned long end, void *ptr, int expected) { int ret = -EINVAL; @@ -138,7 +138,7 @@ static noinline void check_store_range(struct maple_tree *mt, check_load(mt, i, ptr); } -static noinline void check_insert_range(struct maple_tree *mt, +static noinline void __init check_insert_range(struct maple_tree *mt, unsigned long start, unsigned long end, void *ptr, int expected) { int ret = -EINVAL; @@ -154,8 +154,8 @@ static noinline void check_insert_range(struct maple_tree *mt, check_load(mt, i, ptr); } -static noinline void check_insert(struct maple_tree *mt, unsigned long index, - void *ptr) +static noinline void __init check_insert(struct maple_tree *mt, + unsigned long index, void *ptr) { int ret = -EINVAL; @@ -163,7 +163,7 @@ static noinline void check_insert(struct maple_tree *mt, unsigned long index, MT_BUG_ON(mt, ret != 0); } -static noinline void check_dup_insert(struct maple_tree *mt, +static noinline void __init check_dup_insert(struct maple_tree *mt, unsigned long index, void *ptr) { int ret = -EINVAL; @@ -173,13 +173,13 @@ static noinline void check_dup_insert(struct maple_tree *mt, } -static noinline -void check_index_load(struct maple_tree *mt, unsigned long index) +static noinline void __init check_index_load(struct maple_tree *mt, + unsigned long index) { return check_load(mt, index, xa_mk_value(index & LONG_MAX)); } -static inline int not_empty(struct maple_node *node) +static inline __init int not_empty(struct maple_node *node) { int i; @@ -194,8 +194,8 @@ static inline int not_empty(struct maple_node *node) } -static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max, - bool verbose) +static noinline void __init 
check_rev_seq(struct maple_tree *mt, + unsigned long max, bool verbose) { unsigned long i = max, j; @@ -227,7 +227,7 @@ static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max, #endif } -static noinline void check_seq(struct maple_tree *mt, unsigned long max, +static noinline void __init check_seq(struct maple_tree *mt, unsigned long max, bool verbose) { unsigned long i, j; @@ -256,7 +256,7 @@ static noinline void check_seq(struct maple_tree *mt, unsigned long max, #endif } -static noinline void check_lb_not_empty(struct maple_tree *mt) +static noinline void __init check_lb_not_empty(struct maple_tree *mt) { unsigned long i, j; unsigned long huge = 4000UL * 1000 * 1000; @@ -275,13 +275,13 @@ static noinline void check_lb_not_empty(struct maple_tree *mt) mtree_destroy(mt); } -static noinline void check_lower_bound_split(struct maple_tree *mt) +static noinline void __init check_lower_bound_split(struct maple_tree *mt) { MT_BUG_ON(mt, !mtree_empty(mt)); check_lb_not_empty(mt); } -static noinline void check_upper_bound_split(struct maple_tree *mt) +static noinline void __init check_upper_bound_split(struct maple_tree *mt) { unsigned long i, j; unsigned long huge; @@ -306,7 +306,7 @@ static noinline void check_upper_bound_split(struct maple_tree *mt) mtree_destroy(mt); } -static noinline void check_mid_split(struct maple_tree *mt) +static noinline void __init check_mid_split(struct maple_tree *mt) { unsigned long huge = 8000UL * 1000 * 1000; @@ -315,7 +315,7 @@ static noinline void check_mid_split(struct maple_tree *mt) check_lb_not_empty(mt); } -static noinline void check_rev_find(struct maple_tree *mt) +static noinline void __init check_rev_find(struct maple_tree *mt) { int i, nr_entries = 200; void *val; @@ -354,7 +354,7 @@ static noinline void check_rev_find(struct maple_tree *mt) rcu_read_unlock(); } -static noinline void check_find(struct maple_tree *mt) +static noinline void __init check_find(struct maple_tree *mt) { unsigned long val = 0; unsigned long count; @@ -571,7 +571,7 @@ static noinline void check_find(struct maple_tree *mt) mtree_destroy(mt); } -static noinline void check_find_2(struct maple_tree *mt) +static noinline void __init check_find_2(struct maple_tree *mt) { unsigned long i, j; void *entry; @@ -616,7 +616,7 @@ static noinline void check_find_2(struct maple_tree *mt) #if defined(CONFIG_64BIT) -static noinline void check_alloc_rev_range(struct maple_tree *mt) +static noinline void __init check_alloc_rev_range(struct maple_tree *mt) { /* * Generated by: @@ -624,7 +624,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt) * awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}' */ - unsigned long range[] = { + static const unsigned long range[] = { /* Inclusive , Exclusive. */ 0x565234af2000, 0x565234af4000, 0x565234af4000, 0x565234af9000, @@ -652,7 +652,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt) 0x7fff58791000, 0x7fff58793000, }; - unsigned long holes[] = { + static const unsigned long holes[] = { /* * Note: start of hole is INCLUSIVE * end of hole is EXCLUSIVE @@ -672,7 +672,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt) * 4. number that should be returned. * 5. 
return value */ - unsigned long req_range[] = { + static const unsigned long req_range[] = { 0x565234af9000, /* Min */ 0x7fff58791000, /* Max */ 0x1000, /* Size */ @@ -783,7 +783,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt) mtree_destroy(mt); } -static noinline void check_alloc_range(struct maple_tree *mt) +static noinline void __init check_alloc_range(struct maple_tree *mt) { /* * Generated by: @@ -791,7 +791,7 @@ static noinline void check_alloc_range(struct maple_tree *mt) * awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}' */ - unsigned long range[] = { + static const unsigned long range[] = { /* Inclusive , Exclusive. */ 0x565234af2000, 0x565234af4000, 0x565234af4000, 0x565234af9000, @@ -818,7 +818,7 @@ static noinline void check_alloc_range(struct maple_tree *mt) 0x7fff5878e000, 0x7fff58791000, 0x7fff58791000, 0x7fff58793000, }; - unsigned long holes[] = { + static const unsigned long holes[] = { /* Start of hole, end of hole, size of hole (+1) */ 0x565234afb000, 0x565234afc000, 0x1000, 0x565234afe000, 0x565235def000, 0x12F1000, @@ -833,7 +833,7 @@ static noinline void check_alloc_range(struct maple_tree *mt) * 4. number that should be returned. * 5. return value */ - unsigned long req_range[] = { + static const unsigned long req_range[] = { 0x565234af9000, /* Min */ 0x7fff58791000, /* Max */ 0x1000, /* Size */ @@ -942,10 +942,10 @@ static noinline void check_alloc_range(struct maple_tree *mt) } #endif -static noinline void check_ranges(struct maple_tree *mt) +static noinline void __init check_ranges(struct maple_tree *mt) { int i, val, val2; - unsigned long r[] = { + static const unsigned long r[] = { 10, 15, 20, 25, 17, 22, /* Overlaps previous range. */ @@ -1210,7 +1210,7 @@ static noinline void check_ranges(struct maple_tree *mt) MT_BUG_ON(mt, mt_height(mt) != 4); } -static noinline void check_next_entry(struct maple_tree *mt) +static noinline void __init check_next_entry(struct maple_tree *mt) { void *entry = NULL; unsigned long limit = 30, i = 0; @@ -1234,7 +1234,7 @@ static noinline void check_next_entry(struct maple_tree *mt) mtree_destroy(mt); } -static noinline void check_prev_entry(struct maple_tree *mt) +static noinline void __init check_prev_entry(struct maple_tree *mt) { unsigned long index = 16; void *value; @@ -1278,7 +1278,7 @@ static noinline void check_prev_entry(struct maple_tree *mt) mas_unlock(&mas); } -static noinline void check_root_expand(struct maple_tree *mt) +static noinline void __init check_root_expand(struct maple_tree *mt) { MA_STATE(mas, mt, 0, 0); void *ptr; @@ -1367,13 +1367,13 @@ static noinline void check_root_expand(struct maple_tree *mt) mas_unlock(&mas); } -static noinline void check_gap_combining(struct maple_tree *mt) +static noinline void __init check_gap_combining(struct maple_tree *mt) { struct maple_enode *mn1, *mn2; void *entry; unsigned long singletons = 100; - unsigned long *seq100; - unsigned long seq100_64[] = { + static const unsigned long *seq100; + static const unsigned long seq100_64[] = { /* 0-5 */ 74, 75, 76, 50, 100, 2, @@ -1387,7 +1387,7 @@ static noinline void check_gap_combining(struct maple_tree *mt) 76, 2, 79, 85, 4, }; - unsigned long seq100_32[] = { + static const unsigned long seq100_32[] = { /* 0-5 */ 61, 62, 63, 50, 100, 2, @@ -1401,11 +1401,11 @@ static noinline void check_gap_combining(struct maple_tree *mt) 76, 2, 79, 85, 4, }; - unsigned long seq2000[] = { + static const unsigned long seq2000[] = { 1152, 1151, 1100, 1200, 2, }; - unsigned long seq400[] = { + static const unsigned long 
seq400[] = { 286, 318, 256, 260, 266, 270, 275, 280, 290, 398, 286, 310, @@ -1564,7 +1564,7 @@ static noinline void check_gap_combining(struct maple_tree *mt) mt_set_non_kernel(0); mtree_destroy(mt); } -static noinline void check_node_overwrite(struct maple_tree *mt) +static noinline void __init check_node_overwrite(struct maple_tree *mt) { int i, max = 4000; @@ -1577,7 +1577,7 @@ static noinline void check_node_overwrite(struct maple_tree *mt) } #if defined(BENCH_SLOT_STORE) -static noinline void bench_slot_store(struct maple_tree *mt) +static noinline void __init bench_slot_store(struct maple_tree *mt) { int i, brk = 105, max = 1040, brk_start = 100, count = 20000000; @@ -1593,7 +1593,7 @@ static noinline void bench_slot_store(struct maple_tree *mt) #endif #if defined(BENCH_NODE_STORE) -static noinline void bench_node_store(struct maple_tree *mt) +static noinline void __init bench_node_store(struct maple_tree *mt) { int i, overwrite = 76, max = 240, count = 20000000; @@ -1612,7 +1612,7 @@ static noinline void bench_node_store(struct maple_tree *mt) #endif #if defined(BENCH_AWALK) -static noinline void bench_awalk(struct maple_tree *mt) +static noinline void __init bench_awalk(struct maple_tree *mt) { int i, max = 2500, count = 50000000; MA_STATE(mas, mt, 1470, 1470); @@ -1629,7 +1629,7 @@ static noinline void bench_awalk(struct maple_tree *mt) } #endif #if defined(BENCH_WALK) -static noinline void bench_walk(struct maple_tree *mt) +static noinline void __init bench_walk(struct maple_tree *mt) { int i, max = 2500, count = 550000000; MA_STATE(mas, mt, 1470, 1470); @@ -1646,7 +1646,7 @@ static noinline void bench_walk(struct maple_tree *mt) #endif #if defined(BENCH_MT_FOR_EACH) -static noinline void bench_mt_for_each(struct maple_tree *mt) +static noinline void __init bench_mt_for_each(struct maple_tree *mt) { int i, count = 1000000; unsigned long max = 2500, index = 0; @@ -1670,7 +1670,7 @@ static noinline void bench_mt_for_each(struct maple_tree *mt) #endif /* check_forking - simulate the kernel forking sequence with the tree. 
*/ -static noinline void check_forking(struct maple_tree *mt) +static noinline void __init check_forking(struct maple_tree *mt) { struct maple_tree newmt; @@ -1709,7 +1709,7 @@ static noinline void check_forking(struct maple_tree *mt) mtree_destroy(&newmt); } -static noinline void check_iteration(struct maple_tree *mt) +static noinline void __init check_iteration(struct maple_tree *mt) { int i, nr_entries = 125; void *val; @@ -1777,7 +1777,7 @@ static noinline void check_iteration(struct maple_tree *mt) mt_set_non_kernel(0); } -static noinline void check_mas_store_gfp(struct maple_tree *mt) +static noinline void __init check_mas_store_gfp(struct maple_tree *mt) { struct maple_tree newmt; @@ -1810,7 +1810,7 @@ static noinline void check_mas_store_gfp(struct maple_tree *mt) } #if defined(BENCH_FORK) -static noinline void bench_forking(struct maple_tree *mt) +static noinline void __init bench_forking(struct maple_tree *mt) { struct maple_tree newmt; @@ -1852,22 +1852,27 @@ static noinline void bench_forking(struct maple_tree *mt) } #endif -static noinline void next_prev_test(struct maple_tree *mt) +static noinline void __init next_prev_test(struct maple_tree *mt) { int i, nr_entries; void *val; MA_STATE(mas, mt, 0, 0); struct maple_enode *mn; - unsigned long *level2; - unsigned long level2_64[] = {707, 1000, 710, 715, 720, 725}; - unsigned long level2_32[] = {1747, 2000, 1750, 1755, 1760, 1765}; + static const unsigned long *level2; + static const unsigned long level2_64[] = { 707, 1000, 710, 715, 720, + 725}; + static const unsigned long level2_32[] = { 1747, 2000, 1750, 1755, + 1760, 1765}; + unsigned long last_index; if (MAPLE_32BIT) { nr_entries = 500; level2 = level2_32; + last_index = 0x138e; } else { nr_entries = 200; level2 = level2_64; + last_index = 0x7d6; } for (i = 0; i <= nr_entries; i++) @@ -1974,7 +1979,7 @@ static noinline void next_prev_test(struct maple_tree *mt) val = mas_next(&mas, ULONG_MAX); MT_BUG_ON(mt, val != NULL); - MT_BUG_ON(mt, mas.index != ULONG_MAX); + MT_BUG_ON(mt, mas.index != last_index); MT_BUG_ON(mt, mas.last != ULONG_MAX); val = mas_prev(&mas, 0); @@ -2028,7 +2033,7 @@ static noinline void next_prev_test(struct maple_tree *mt) /* Test spanning writes that require balancing right sibling or right cousin */ -static noinline void check_spanning_relatives(struct maple_tree *mt) +static noinline void __init check_spanning_relatives(struct maple_tree *mt) { unsigned long i, nr_entries = 1000; @@ -2041,7 +2046,7 @@ static noinline void check_spanning_relatives(struct maple_tree *mt) mtree_store_range(mt, 9365, 9955, NULL, GFP_KERNEL); } -static noinline void check_fuzzer(struct maple_tree *mt) +static noinline void __init check_fuzzer(struct maple_tree *mt) { /* * 1. Causes a spanning rebalance of a single root node. @@ -2438,7 +2443,7 @@ static noinline void check_fuzzer(struct maple_tree *mt) } /* duplicate the tree with a specific gap */ -static noinline void check_dup_gaps(struct maple_tree *mt, +static noinline void __init check_dup_gaps(struct maple_tree *mt, unsigned long nr_entries, bool zero_start, unsigned long gap) { @@ -2478,7 +2483,7 @@ static noinline void check_dup_gaps(struct maple_tree *mt, } /* Duplicate many sizes of trees. 
Mainly to test expected entry values */ -static noinline void check_dup(struct maple_tree *mt) +static noinline void __init check_dup(struct maple_tree *mt) { int i; int big_start = 100010; @@ -2566,7 +2571,7 @@ static noinline void check_dup(struct maple_tree *mt) } } -static noinline void check_bnode_min_spanning(struct maple_tree *mt) +static noinline void __init check_bnode_min_spanning(struct maple_tree *mt) { int i = 50; MA_STATE(mas, mt, 0, 0); @@ -2585,7 +2590,7 @@ static noinline void check_bnode_min_spanning(struct maple_tree *mt) mt_set_non_kernel(0); } -static noinline void check_empty_area_window(struct maple_tree *mt) +static noinline void __init check_empty_area_window(struct maple_tree *mt) { unsigned long i, nr_entries = 20; MA_STATE(mas, mt, 0, 0); @@ -2670,7 +2675,7 @@ static noinline void check_empty_area_window(struct maple_tree *mt) rcu_read_unlock(); } -static noinline void check_empty_area_fill(struct maple_tree *mt) +static noinline void __init check_empty_area_fill(struct maple_tree *mt) { const unsigned long max = 0x25D78000; unsigned long size; @@ -2714,11 +2719,11 @@ static noinline void check_empty_area_fill(struct maple_tree *mt) } static DEFINE_MTREE(tree); -static int maple_tree_seed(void) +static int __init maple_tree_seed(void) { - unsigned long set[] = {5015, 5014, 5017, 25, 1000, - 1001, 1002, 1003, 1005, 0, - 5003, 5002}; + unsigned long set[] = { 5015, 5014, 5017, 25, 1000, + 1001, 1002, 1003, 1005, 0, + 5003, 5002}; void *ptr = &set; pr_info("\nTEST STARTING\n\n"); @@ -2988,7 +2993,7 @@ skip: return -EINVAL; } -static void maple_tree_harvest(void) +static void __exit maple_tree_harvest(void) { } diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 5b663eca1f293..47e2b545ffcc6 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2490,7 +2490,7 @@ int unpoison_memory(unsigned long pfn) goto unlock_mutex; } - if (!folio_test_hwpoison(folio)) { + if (!PageHWPoison(p)) { unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n", pfn, &unpoison_rs); goto unlock_mutex; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 1756389a06094..d524bf8d0e90c 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -384,8 +384,10 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) VMA_ITERATOR(vmi, mm, 0); mmap_write_lock(mm); - for_each_vma(vmi, vma) + for_each_vma(vmi, vma) { + vma_start_write(vma); mpol_rebind_policy(vma->vm_policy, new); + } mmap_write_unlock(mm); } @@ -765,6 +767,8 @@ static int vma_replace_policy(struct vm_area_struct *vma, struct mempolicy *old; struct mempolicy *new; + vma_assert_write_locked(vma); + pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n", vma->vm_start, vma->vm_end, vma->vm_pgoff, vma->vm_ops, vma->vm_file, @@ -1313,6 +1317,14 @@ static long do_mbind(unsigned long start, unsigned long len, if (err) goto mpol_out; + /* + * Lock the VMAs before scanning for pages to migrate, to ensure we don't + * miss a concurrently inserted page. 
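The hunk that follows makes do_mbind() write-lock every VMA in the range before queue_pages_range() scans it, so a page fault racing with the scan cannot install pages that migration would then miss. A rough, compilable userspace analogue of write-locking a whole range up front, using one pthread rwlock per area (the vma struct and helper are illustrative stand-ins, not the kernel's per-VMA lock):

#include <pthread.h>
#include <stdio.h>

#define NR_VMAS 3

/* Toy stand-in for a VMA guarded by a per-area reader/writer lock. */
struct vma {
	pthread_rwlock_t lock;
};

/* Analogue of vma_start_write(): exclude readers (faults) from here on. */
static void vma_write_lock(struct vma *v)
{
	pthread_rwlock_wrlock(&v->lock);
}

int main(void)
{
	struct vma vmas[NR_VMAS];
	int i;

	for (i = 0; i < NR_VMAS; i++)
		pthread_rwlock_init(&vmas[i].lock, NULL);

	/* Lock the whole range first, then scan it; nothing can slip in. */
	for (i = 0; i < NR_VMAS; i++)
		vma_write_lock(&vmas[i]);
	puts("range locked; the scan cannot race with faults");

	for (i = 0; i < NR_VMAS; i++)
		pthread_rwlock_unlock(&vmas[i].lock);
	return 0;
}

The kernel's per-VMA locks are released implicitly at mmap_write_unlock() rather than one by one, but the ordering requirement is the same: all write locks are taken before the scan starts.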
+ */ + vma_iter_init(&vmi, mm, start); + for_each_vma_range(vmi, vma, end) + vma_start_write(vma); + ret = queue_pages_range(mm, start, end, nmask, flags | MPOL_MF_INVERT, &pagelist); @@ -1538,6 +1550,7 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le break; } + vma_start_write(vma); new->home_node = home_node; err = mbind_range(&vmi, vma, &prev, start, end, new); mpol_put(new); diff --git a/mm/mmap.c b/mm/mmap.c index 5c5a917b261e7..224a9646a7dbd 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -647,6 +647,7 @@ static inline int dup_anon_vma(struct vm_area_struct *dst, * anon pages imported. */ if (src->anon_vma && !dst->anon_vma) { + vma_start_write(dst); dst->anon_vma = src->anon_vma; return anon_vma_clone(dst, src); } diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index cd7b0bf5369ec..5eb4898cccd4c 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -1123,6 +1123,7 @@ bool ceph_addr_is_blank(const struct ceph_entity_addr *addr) return true; } } +EXPORT_SYMBOL(ceph_addr_is_blank); int ceph_addr_port(const struct ceph_entity_addr *addr) { diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 5affca8e2f53a..c63f1d62d60a5 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -2561,12 +2561,18 @@ static void manage_tempaddrs(struct inet6_dev *idev, ipv6_ifa_notify(0, ift); } - if ((create || list_empty(&idev->tempaddr_list)) && - idev->cnf.use_tempaddr > 0) { + /* Also create a temporary address if it's enabled but no temporary + * address currently exists. + * However, we get called with valid_lft == 0, prefered_lft == 0, create == false + * as part of cleanup (i.e. deleting the mngtmpaddr). + * We don't want that to result in creating a new temporary IP address. + */ + if (list_empty(&idev->tempaddr_list) && (valid_lft || prefered_lft)) + create = true; + + if (create && idev->cnf.use_tempaddr > 0) { /* When a new public address is created as described * in [ADDRCONF], also create a new temporary address. - * Also create a temporary address if it's enabled but - * no temporary address currently exists.
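The rewritten condition in manage_tempaddrs() separates the two reasons for creating a temporary address: an explicit request (create) and an empty temp-address list, where the latter only applies while the address still has lifetime left, so the cleanup path that calls in with both lifetimes zero no longer spawns a fresh address. A small compilable sketch of the resulting predicate (the function name and parameters are illustrative only):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the manage_tempaddrs() decision: an empty list only forces
 * creation while some lifetime remains, so deleting the mngtmpaddr
 * (both lifetimes 0) cannot trigger a new temporary address. */
static bool should_create_tempaddr(bool create, bool list_empty,
				   unsigned long valid_lft,
				   unsigned long prefered_lft,
				   int use_tempaddr)
{
	if (list_empty && (valid_lft || prefered_lft))
		create = true;
	return create && use_tempaddr > 0;
}

int main(void)
{
	/* cleanup of a mngtmpaddr: must not create (prints 0) */
	printf("%d\n", should_create_tempaddr(false, true, 0, 0, 1));
	/* fresh public address with lifetime left: creates (prints 1) */
	printf("%d\n", should_create_tempaddr(true, false, 3600, 1800, 1));
	return 0;
}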
*/ read_unlock_bh(&idev->lock); ipv6_create_tempaddr(ifp, false); diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index b069826869d05..f2eeb8a850af2 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -3717,10 +3717,9 @@ static int mptcp_listen(struct socket *sock, int backlog) if (!err) { sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); mptcp_copy_inaddrs(sk, ssock->sk); + mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); } - mptcp_event_pm_listener(ssock->sk, MPTCP_EVENT_LISTENER_CREATED); - unlock: release_sock(sk); return err; diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index ccf0b3d80fd97..da00c411a9cd4 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -3810,8 +3810,6 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info, NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]); return PTR_ERR(chain); } - if (nft_chain_is_bound(chain)) - return -EOPNOTSUPP; } else if (nla[NFTA_RULE_CHAIN_ID]) { chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID], @@ -3824,6 +3822,9 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info, return -EINVAL; } + if (nft_chain_is_bound(chain)) + return -EOPNOTSUPP; + if (nla[NFTA_RULE_HANDLE]) { handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE])); rule = __nft_rule_lookup(chain, handle); diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c index 407d7197f75bb..fccb3cf7749c1 100644 --- a/net/netfilter/nft_immediate.c +++ b/net/netfilter/nft_immediate.c @@ -125,15 +125,27 @@ static void nft_immediate_activate(const struct nft_ctx *ctx, return nft_data_hold(&priv->data, nft_dreg_to_type(priv->dreg)); } +static void nft_immediate_chain_deactivate(const struct nft_ctx *ctx, + struct nft_chain *chain, + enum nft_trans_phase phase) +{ + struct nft_ctx chain_ctx; + struct nft_rule *rule; + + chain_ctx = *ctx; + chain_ctx.chain = chain; + + list_for_each_entry(rule, &chain->rules, list) + nft_rule_expr_deactivate(&chain_ctx, rule, phase); +} + static void nft_immediate_deactivate(const struct nft_ctx *ctx, const struct nft_expr *expr, enum nft_trans_phase phase) { const struct nft_immediate_expr *priv = nft_expr_priv(expr); const struct nft_data *data = &priv->data; - struct nft_ctx chain_ctx; struct nft_chain *chain; - struct nft_rule *rule; if (priv->dreg == NFT_REG_VERDICT) { switch (data->verdict.code) { @@ -143,20 +155,17 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx, if (!nft_chain_binding(chain)) break; - chain_ctx = *ctx; - chain_ctx.chain = chain; - - list_for_each_entry(rule, &chain->rules, list) - nft_rule_expr_deactivate(&chain_ctx, rule, phase); - switch (phase) { case NFT_TRANS_PREPARE_ERROR: nf_tables_unbind_chain(ctx, chain); - fallthrough; + nft_deactivate_next(ctx->net, chain); + break; case NFT_TRANS_PREPARE: + nft_immediate_chain_deactivate(ctx, chain, phase); nft_deactivate_next(ctx->net, chain); break; default: + nft_immediate_chain_deactivate(ctx, chain, phase); nft_chain_del(chain); chain->bound = false; nft_use_dec(&chain->table->use); diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c index 5c05c9b990fba..8d73fffd2d09d 100644 --- a/net/netfilter/nft_set_rbtree.c +++ b/net/netfilter/nft_set_rbtree.c @@ -217,29 +217,37 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set, static int nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv, - struct nft_rbtree_elem *rbe) + 
struct nft_rbtree_elem *rbe, + u8 genmask) { struct nft_set *set = (struct nft_set *)__set; struct rb_node *prev = rb_prev(&rbe->node); - struct nft_rbtree_elem *rbe_prev = NULL; + struct nft_rbtree_elem *rbe_prev; struct nft_set_gc_batch *gcb; gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC); if (!gcb) return -ENOMEM; - /* search for expired end interval coming before this element. */ + /* search for end interval coming before this element. + * end intervals don't carry a timeout extension, they + * are coupled with the interval start element. + */ while (prev) { rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node); - if (nft_rbtree_interval_end(rbe_prev)) + if (nft_rbtree_interval_end(rbe_prev) && + nft_set_elem_active(&rbe_prev->ext, genmask)) break; prev = rb_prev(prev); } - if (rbe_prev) { + if (prev) { + rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node); + rb_erase(&rbe_prev->node, &priv->root); atomic_dec(&set->nelems); + nft_set_gc_batch_add(gcb, rbe_prev); } rb_erase(&rbe->node, &priv->root); @@ -321,7 +329,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, /* perform garbage collection to avoid bogus overlap reports. */ if (nft_set_elem_expired(&rbe->ext)) { - err = nft_rbtree_gc_elem(set, priv, rbe); + err = nft_rbtree_gc_elem(set, priv, rbe, genmask); if (err < 0) return err; diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c index ab69ff7577fc7..793009f445c03 100644 --- a/net/sched/sch_mqprio.c +++ b/net/sched/sch_mqprio.c @@ -290,6 +290,13 @@ static int mqprio_parse_nlattr(struct Qdisc *sch, struct tc_mqprio_qopt *qopt, "Attribute type expected to be TCA_MQPRIO_MIN_RATE64"); return -EINVAL; } + + if (nla_len(attr) != sizeof(u64)) { + NL_SET_ERR_MSG_ATTR(extack, attr, + "Attribute TCA_MQPRIO_MIN_RATE64 expected to have 8 bytes length"); + return -EINVAL; + } + if (i >= qopt->num_tc) break; priv->min_rate[i] = nla_get_u64(attr); @@ -312,6 +319,13 @@ static int mqprio_parse_nlattr(struct Qdisc *sch, struct tc_mqprio_qopt *qopt, "Attribute type expected to be TCA_MQPRIO_MAX_RATE64"); return -EINVAL; } + + if (nla_len(attr) != sizeof(u64)) { + NL_SET_ERR_MSG_ATTR(extack, attr, + "Attribute TCA_MQPRIO_MAX_RATE64 expected to have 8 bytes length"); + return -EINVAL; + } + if (i >= qopt->num_tc) break; priv->max_rate[i] = nla_get_u64(attr); diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c index 577fa5af33ec7..302fd749c4249 100644 --- a/net/tipc/crypto.c +++ b/net/tipc/crypto.c @@ -1960,7 +1960,8 @@ rcv: skb_reset_network_header(*skb); skb_pull(*skb, tipc_ehdr_size(ehdr)); - pskb_trim(*skb, (*skb)->len - aead->authsize); + if (pskb_trim(*skb, (*skb)->len - aead->authsize)) + goto free_skb; /* Validate TIPCv2 message */ if (unlikely(!tipc_msg_validate(skb))) { diff --git a/net/tipc/node.c b/net/tipc/node.c index 5e000fde80676..a9c5b6594889b 100644 --- a/net/tipc/node.c +++ b/net/tipc/node.c @@ -583,7 +583,7 @@ update: n->capabilities, &n->bc_entry.inputq1, &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) { pr_warn("Broadcast rcv link creation failed, no memory\n"); - kfree(n); + tipc_node_put(n); n = NULL; goto exit; } diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 169572c8ed40f..bcd548e247fc8 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -9502,6 +9502,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", 
ALC285_FIXUP_HP_SPECTRE_X360_EB1), SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x881d, "HP 250 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), @@ -9628,6 +9629,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS), SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_HEADSET_MIC), SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC), + SND_PCI_QUIRK(0x1043, 0x1d1f, "ASUS ROG Strix G17 2023 (G713PV)", ALC287_FIXUP_CS35L41_I2C_2), SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401), SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2), diff --git a/sound/soc/codecs/wm8904.c b/sound/soc/codecs/wm8904.c index 791d8738d1c0e..a05d4dafd3d77 100644 --- a/sound/soc/codecs/wm8904.c +++ b/sound/soc/codecs/wm8904.c @@ -2308,6 +2308,9 @@ static int wm8904_i2c_probe(struct i2c_client *i2c) regmap_update_bits(wm8904->regmap, WM8904_BIAS_CONTROL_0, WM8904_POBCTRL, 0); + /* Fill the cache for the ADC test register */ + regmap_read(wm8904->regmap, WM8904_ADC_TEST_0, &val); + /* Can leave the device powered off until we need it */ regcache_cache_only(wm8904->regmap, true); regulator_bulk_disable(ARRAY_SIZE(wm8904->supplies), wm8904->supplies); diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c index 015c3708aa04e..3fd26f2cdd60f 100644 --- a/sound/soc/fsl/fsl_spdif.c +++ b/sound/soc/fsl/fsl_spdif.c @@ -751,6 +751,8 @@ static int fsl_spdif_trigger(struct snd_pcm_substream *substream, case SNDRV_PCM_TRIGGER_PAUSE_PUSH: regmap_update_bits(regmap, REG_SPDIF_SCR, dmaen, 0); regmap_update_bits(regmap, REG_SPDIF_SIE, intr, 0); + regmap_write(regmap, REG_SPDIF_STL, 0x0); + regmap_write(regmap, REG_SPDIF_STR, 0x0); break; default: return -EINVAL; diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py index 3144f33196be4..35462c7ce48b5 100644 --- a/tools/net/ynl/lib/ynl.py +++ b/tools/net/ynl/lib/ynl.py @@ -405,8 +405,8 @@ class YnlFamily(SpecFamily): def _decode_enum(self, rsp, attr_spec): raw = rsp[attr_spec['name']] enum = self.consts[attr_spec['enum']] - i = attr_spec.get('value-start', 0) if 'enum-as-flags' in attr_spec and attr_spec['enum-as-flags']: + i = 0 value = set() while raw: if raw & 1: value.add(enum.entries_by_val[i].name) raw >>= 1 i += 1 else: - value = enum.entries_by_val[raw - i].name + value = enum.entries_by_val[raw].name rsp[attr_spec['name']] = value def _decode_binary(self, attr, attr_spec): diff --git a/tools/testing/radix-tree/linux/init.h b/tools/testing/radix-tree/linux/init.h index 1bb0afc213099..81563c3dfce79 100644 --- a/tools/testing/radix-tree/linux/init.h +++ b/tools/testing/radix-tree/linux/init.h @@ -1 +1,2 @@ #define __init +#define __exit diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c index adc5392df4009..67c56e9e92606 100644 --- a/tools/testing/radix-tree/maple.c +++ b/tools/testing/radix-tree/maple.c @@ -14,6 +14,7 @@ #include "test.h" #include <stdlib.h> #include <time.h> +#include "linux/init.h" #define module_init(x) #define
module_exit(x) @@ -81,7 +82,7 @@ static void check_mas_alloc_node_count(struct ma_state *mas) * check_new_node() - Check the creation of new nodes and error path * verification. */ -static noinline void check_new_node(struct maple_tree *mt) +static noinline void __init check_new_node(struct maple_tree *mt) { struct maple_node *mn, *mn2, *mn3; @@ -455,7 +456,7 @@ static noinline void check_new_node(struct maple_tree *mt) /* * Check erasing including RCU. */ -static noinline void check_erase(struct maple_tree *mt, unsigned long index, +static noinline void __init check_erase(struct maple_tree *mt, unsigned long index, void *ptr) { MT_BUG_ON(mt, mtree_test_erase(mt, index) != ptr); @@ -465,24 +466,24 @@ static noinline void check_erase(struct maple_tree *mt, unsigned long index, #define erase_check_insert(mt, i) check_insert(mt, set[i], entry[i%2]) #define erase_check_erase(mt, i) check_erase(mt, set[i], entry[i%2]) -static noinline void check_erase_testset(struct maple_tree *mt) +static noinline void __init check_erase_testset(struct maple_tree *mt) { - unsigned long set[] = { 5015, 5014, 5017, 25, 1000, - 1001, 1002, 1003, 1005, 0, - 6003, 6002, 6008, 6012, 6015, - 7003, 7002, 7008, 7012, 7015, - 8003, 8002, 8008, 8012, 8015, - 9003, 9002, 9008, 9012, 9015, - 10003, 10002, 10008, 10012, 10015, - 11003, 11002, 11008, 11012, 11015, - 12003, 12002, 12008, 12012, 12015, - 13003, 13002, 13008, 13012, 13015, - 14003, 14002, 14008, 14012, 14015, - 15003, 15002, 15008, 15012, 15015, - }; - - - void *ptr = &set; + static const unsigned long set[] = { 5015, 5014, 5017, 25, 1000, + 1001, 1002, 1003, 1005, 0, + 6003, 6002, 6008, 6012, 6015, + 7003, 7002, 7008, 7012, 7015, + 8003, 8002, 8008, 8012, 8015, + 9003, 9002, 9008, 9012, 9015, + 10003, 10002, 10008, 10012, 10015, + 11003, 11002, 11008, 11012, 11015, + 12003, 12002, 12008, 12012, 12015, + 13003, 13002, 13008, 13012, 13015, + 14003, 14002, 14008, 14012, 14015, + 15003, 15002, 15008, 15012, 15015, + }; + + + void *ptr = &check_erase_testset; void *entry[2] = { ptr, mt }; void *root_node; @@ -739,7 +740,7 @@ static noinline void check_erase_testset(struct maple_tree *mt) int mas_ce2_over_count(struct ma_state *mas_start, struct ma_state *mas_end, void *s_entry, unsigned long s_min, void *e_entry, unsigned long e_max, - unsigned long *set, int i, bool null_entry) + const unsigned long *set, int i, bool null_entry) { int count = 0, span = 0; unsigned long retry = 0; @@ -969,8 +970,8 @@ retry: } #if defined(CONFIG_64BIT) -static noinline void check_erase2_testset(struct maple_tree *mt, - unsigned long *set, unsigned long size) +static noinline void __init check_erase2_testset(struct maple_tree *mt, + const unsigned long *set, unsigned long size) { int entry_count = 0; int check = 0; @@ -1114,11 +1115,11 @@ static noinline void check_erase2_testset(struct maple_tree *mt, /* These tests were pulled from KVM tree modifications which failed. 
*/ -static noinline void check_erase2_sets(struct maple_tree *mt) +static noinline void __init check_erase2_sets(struct maple_tree *mt) { void *entry; unsigned long start = 0; - unsigned long set[] = { + static const unsigned long set[] = { STORE, 140737488347136, 140737488351231, STORE, 140721266458624, 140737488351231, ERASE, 140721266458624, 140737488351231, @@ -1136,7 +1137,7 @@ ERASE, 140253902692352, 140253902864383, STORE, 140253902692352, 140253902696447, STORE, 140253902696448, 140253902864383, }; - unsigned long set2[] = { + static const unsigned long set2[] = { STORE, 140737488347136, 140737488351231, STORE, 140735933583360, 140737488351231, ERASE, 140735933583360, 140737488351231, @@ -1160,7 +1161,7 @@ STORE, 140277094813696, 140277094821887, STORE, 140277094821888, 140277094825983, STORE, 140735933906944, 140735933911039, }; - unsigned long set3[] = { + static const unsigned long set3[] = { STORE, 140737488347136, 140737488351231, STORE, 140735790264320, 140737488351231, ERASE, 140735790264320, 140737488351231, @@ -1203,7 +1204,7 @@ STORE, 47135835840512, 47135835885567, STORE, 47135835885568, 47135835893759, }; - unsigned long set4[] = { + static const unsigned long set4[] = { STORE, 140737488347136, 140737488351231, STORE, 140728251703296, 140737488351231, ERASE, 140728251703296, 140737488351231, @@ -1224,7 +1225,7 @@ ERASE, 47646523277312, 47646523445247, STORE, 47646523277312, 47646523400191, }; - unsigned long set5[] = { + static const unsigned long set5[] = { STORE, 140737488347136, 140737488351231, STORE, 140726874062848, 140737488351231, ERASE, 140726874062848, 140737488351231, @@ -1357,7 +1358,7 @@ STORE, 47884791619584, 47884791623679, STORE, 47884791623680, 47884791627775, }; - unsigned long set6[] = { + static const unsigned long set6[] = { STORE, 140737488347136, 140737488351231, STORE, 140722999021568, 140737488351231, ERASE, 140722999021568, 140737488351231, @@ -1489,7 +1490,7 @@ ERASE, 47430432014336, 47430432022527, STORE, 47430432014336, 47430432018431, STORE, 47430432018432, 47430432022527, }; - unsigned long set7[] = { + static const unsigned long set7[] = { STORE, 140737488347136, 140737488351231, STORE, 140729808330752, 140737488351231, ERASE, 140729808330752, 140737488351231, @@ -1621,7 +1622,7 @@ ERASE, 47439987130368, 47439987138559, STORE, 47439987130368, 47439987134463, STORE, 47439987134464, 47439987138559, }; - unsigned long set8[] = { + static const unsigned long set8[] = { STORE, 140737488347136, 140737488351231, STORE, 140722482974720, 140737488351231, ERASE, 140722482974720, 140737488351231, @@ -1754,7 +1755,7 @@ STORE, 47708488638464, 47708488642559, STORE, 47708488642560, 47708488646655, }; - unsigned long set9[] = { + static const unsigned long set9[] = { STORE, 140737488347136, 140737488351231, STORE, 140736427839488, 140737488351231, ERASE, 140736427839488, 140736427839488, @@ -5620,7 +5621,7 @@ ERASE, 47906195480576, 47906195480576, STORE, 94641242615808, 94641242750975, }; - unsigned long set10[] = { + static const unsigned long set10[] = { STORE, 140737488347136, 140737488351231, STORE, 140736427839488, 140737488351231, ERASE, 140736427839488, 140736427839488, @@ -9484,7 +9485,7 @@ STORE, 139726599680000, 139726599684095, ERASE, 47906195480576, 47906195480576, STORE, 94641242615808, 94641242750975, }; - unsigned long set11[] = { + static const unsigned long set11[] = { STORE, 140737488347136, 140737488351231, STORE, 140732658499584, 140737488351231, ERASE, 140732658499584, 140732658499584, @@ -9510,7 +9511,7 @@ STORE, 140732658565120, 
140732658569215, STORE, 140732658552832, 140732658565119, }; - unsigned long set12[] = { /* contains 12 values. */ + static const unsigned long set12[] = { /* contains 12 values. */ STORE, 140737488347136, 140737488351231, STORE, 140732658499584, 140737488351231, ERASE, 140732658499584, 140732658499584, @@ -9537,7 +9538,7 @@ STORE, 140732658552832, 140732658565119, STORE, 140014592741375, 140014592741375, /* contrived */ STORE, 140014592733184, 140014592741376, /* creates first entry retry. */ }; - unsigned long set13[] = { + static const unsigned long set13[] = { STORE, 140373516247040, 140373516251135,/*: ffffa2e7b0e10d80 */ STORE, 140373516251136, 140373516255231,/*: ffffa2e7b1195d80 */ STORE, 140373516255232, 140373516443647,/*: ffffa2e7b0e109c0 */ @@ -9550,7 +9551,7 @@ STORE, 140373518684160, 140373518688254,/*: ffffa2e7b05fec00 */ STORE, 140373518688256, 140373518692351,/*: ffffa2e7bfbdcd80 */ STORE, 140373518692352, 140373518696447,/*: ffffa2e7b0749e40 */ }; - unsigned long set14[] = { + static const unsigned long set14[] = { STORE, 140737488347136, 140737488351231, STORE, 140731667996672, 140737488351231, SNULL, 140731668000767, 140737488351231, @@ -9834,7 +9835,7 @@ SNULL, 139826136543232, 139826136809471, STORE, 139826136809472, 139826136842239, STORE, 139826136543232, 139826136809471, }; - unsigned long set15[] = { + static const unsigned long set15[] = { STORE, 140737488347136, 140737488351231, STORE, 140722061451264, 140737488351231, SNULL, 140722061455359, 140737488351231, @@ -10119,7 +10120,7 @@ STORE, 139906808958976, 139906808991743, STORE, 139906808692736, 139906808958975, }; - unsigned long set16[] = { + static const unsigned long set16[] = { STORE, 94174808662016, 94174809321471, STORE, 94174811414528, 94174811426815, STORE, 94174811426816, 94174811430911, @@ -10330,7 +10331,7 @@ STORE, 139921865613312, 139921865617407, STORE, 139921865547776, 139921865564159, }; - unsigned long set17[] = { + static const unsigned long set17[] = { STORE, 94397057224704, 94397057646591, STORE, 94397057650688, 94397057691647, STORE, 94397057691648, 94397057695743, @@ -10392,7 +10393,7 @@ STORE, 140720477511680, 140720477646847, STORE, 140720478302208, 140720478314495, STORE, 140720478314496, 140720478318591, }; - unsigned long set18[] = { + static const unsigned long set18[] = { STORE, 140737488347136, 140737488351231, STORE, 140724953673728, 140737488351231, SNULL, 140724953677823, 140737488351231, @@ -10425,7 +10426,7 @@ STORE, 140222970597376, 140222970605567, ERASE, 140222970597376, 140222970605567, STORE, 140222970597376, 140222970605567, }; - unsigned long set19[] = { + static const unsigned long set19[] = { STORE, 140737488347136, 140737488351231, STORE, 140725182459904, 140737488351231, SNULL, 140725182463999, 140737488351231, @@ -10694,7 +10695,7 @@ STORE, 140656836775936, 140656836780031, STORE, 140656787476480, 140656791920639, ERASE, 140656774639616, 140656779083775, }; - unsigned long set20[] = { + static const unsigned long set20[] = { STORE, 140737488347136, 140737488351231, STORE, 140735952392192, 140737488351231, SNULL, 140735952396287, 140737488351231, @@ -10850,7 +10851,7 @@ STORE, 140590386819072, 140590386823167, STORE, 140590386823168, 140590386827263, SNULL, 140590376591359, 140590376595455, }; - unsigned long set21[] = { + static const unsigned long set21[] = { STORE, 93874710941696, 93874711363583, STORE, 93874711367680, 93874711408639, STORE, 93874711408640, 93874711412735, @@ -10920,7 +10921,7 @@ ERASE, 140708393312256, 140708393316351, ERASE, 140708393308160, 
140708393312255, ERASE, 140708393291776, 140708393308159, }; - unsigned long set22[] = { + static const unsigned long set22[] = { STORE, 93951397134336, 93951397183487, STORE, 93951397183488, 93951397728255, STORE, 93951397728256, 93951397826559, @@ -11047,7 +11048,7 @@ STORE, 140551361253376, 140551361519615, ERASE, 140551361253376, 140551361519615, }; - unsigned long set23[] = { + static const unsigned long set23[] = { STORE, 94014447943680, 94014448156671, STORE, 94014450253824, 94014450257919, STORE, 94014450257920, 94014450266111, @@ -14371,7 +14372,7 @@ SNULL, 140175956627455, 140175985139711, STORE, 140175927242752, 140175956627455, STORE, 140175956627456, 140175985139711, }; - unsigned long set24[] = { + static const unsigned long set24[] = { STORE, 140737488347136, 140737488351231, STORE, 140735281639424, 140737488351231, SNULL, 140735281643519, 140737488351231, @@ -15533,7 +15534,7 @@ ERASE, 139635393024000, 139635401412607, ERASE, 139635384627200, 139635384631295, ERASE, 139635384631296, 139635393019903, }; - unsigned long set25[] = { + static const unsigned long set25[] = { STORE, 140737488347136, 140737488351231, STORE, 140737488343040, 140737488351231, STORE, 140722547441664, 140737488351231, @@ -22321,7 +22322,7 @@ STORE, 140249652703232, 140249682087935, STORE, 140249682087936, 140249710600191, }; - unsigned long set26[] = { + static const unsigned long set26[] = { STORE, 140737488347136, 140737488351231, STORE, 140729464770560, 140737488351231, SNULL, 140729464774655, 140737488351231, @@ -22345,7 +22346,7 @@ ERASE, 140109040951296, 140109040959487, STORE, 140109040955392, 140109040959487, ERASE, 140109040955392, 140109040959487, }; - unsigned long set27[] = { + static const unsigned long set27[] = { STORE, 140737488347136, 140737488351231, STORE, 140726128070656, 140737488351231, SNULL, 140726128074751, 140737488351231, @@ -22741,7 +22742,7 @@ STORE, 140415509696512, 140415535910911, ERASE, 140415537422336, 140415562588159, STORE, 140415482433536, 140415509696511, }; - unsigned long set28[] = { + static const unsigned long set28[] = { STORE, 140737488347136, 140737488351231, STORE, 140722475622400, 140737488351231, SNULL, 140722475626495, 140737488351231, @@ -22809,7 +22810,7 @@ STORE, 139918413348864, 139918413352959, ERASE, 139918413316096, 139918413344767, STORE, 93865848528896, 93865848664063, }; - unsigned long set29[] = { + static const unsigned long set29[] = { STORE, 140737488347136, 140737488351231, STORE, 140734467944448, 140737488351231, SNULL, 140734467948543, 140737488351231, @@ -23684,7 +23685,7 @@ ERASE, 140143079972864, 140143088361471, ERASE, 140143205793792, 140143205797887, ERASE, 140143205797888, 140143214186495, }; - unsigned long set30[] = { + static const unsigned long set30[] = { STORE, 140737488347136, 140737488351231, STORE, 140733436743680, 140737488351231, SNULL, 140733436747775, 140737488351231, @@ -24566,7 +24567,7 @@ ERASE, 140165225893888, 140165225897983, ERASE, 140165225897984, 140165234286591, ERASE, 140165058105344, 140165058109439, }; - unsigned long set31[] = { + static const unsigned long set31[] = { STORE, 140737488347136, 140737488351231, STORE, 140730890784768, 140737488351231, SNULL, 140730890788863, 140737488351231, @@ -25379,7 +25380,7 @@ ERASE, 140623906590720, 140623914979327, ERASE, 140622950277120, 140622950281215, ERASE, 140622950281216, 140622958669823, }; - unsigned long set32[] = { + static const unsigned long set32[] = { STORE, 140737488347136, 140737488351231, STORE, 140731244212224, 140737488351231, SNULL, 
@@ -26175,7 +26176,7 @@ ERASE, 140400417288192, 140400425676799,
 ERASE, 140400283066368, 140400283070463,
 ERASE, 140400283070464, 140400291459071,
 };
-	unsigned long set33[] = {
+	static const unsigned long set33[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140734562918400, 140737488351231,
 SNULL, 140734562922495, 140737488351231,
@@ -26317,7 +26318,7 @@ STORE, 140582961786880, 140583003750399,
 ERASE, 140582961786880, 140583003750399,
 };
-	unsigned long set34[] = {
+	static const unsigned long set34[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140731327180800, 140737488351231,
 SNULL, 140731327184895, 140737488351231,
@@ -27198,7 +27199,7 @@ ERASE, 140012522094592, 140012530483199,
 ERASE, 140012033142784, 140012033146879,
 ERASE, 140012033146880, 140012041535487,
 };
-	unsigned long set35[] = {
+	static const unsigned long set35[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140730536939520, 140737488351231,
 SNULL, 140730536943615, 140737488351231,
@@ -27955,7 +27956,7 @@ ERASE, 140474471936000, 140474480324607,
 ERASE, 140474396430336, 140474396434431,
 ERASE, 140474396434432, 140474404823039,
 };
-	unsigned long set36[] = {
+	static const unsigned long set36[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140723893125120, 140737488351231,
 SNULL, 140723893129215, 140737488351231,
@@ -28816,7 +28817,7 @@ ERASE, 140121890357248, 140121898745855,
 ERASE, 140121269587968, 140121269592063,
 ERASE, 140121269592064, 140121277980671,
 };
-	unsigned long set37[] = {
+	static const unsigned long set37[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140722404016128, 140737488351231,
 SNULL, 140722404020223, 140737488351231,
@@ -28942,7 +28943,7 @@ STORE, 139759821246464, 139759888355327,
 ERASE, 139759821246464, 139759888355327,
 ERASE, 139759888355328, 139759955464191,
 };
-	unsigned long set38[] = {
+	static const unsigned long set38[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140730666221568, 140737488351231,
 SNULL, 140730666225663, 140737488351231,
@@ -29752,7 +29753,7 @@ ERASE, 140613504712704, 140613504716799,
 ERASE, 140613504716800, 140613513105407,
 };
-	unsigned long set39[] = {
+	static const unsigned long set39[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140736271417344, 140737488351231,
 SNULL, 140736271421439, 140737488351231,
@@ -30124,7 +30125,7 @@ STORE, 140325364428800, 140325372821503,
 STORE, 140325356036096, 140325364428799,
 SNULL, 140325364432895, 140325372821503,
 };
-	unsigned long set40[] = {
+	static const unsigned long set40[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140734309167104, 140737488351231,
 SNULL, 140734309171199, 140737488351231,
@@ -30875,7 +30876,7 @@ ERASE, 140320289300480, 140320289304575,
 ERASE, 140320289304576, 140320297693183,
 ERASE, 140320163409920, 140320163414015,
 };
-	unsigned long set41[] = {
+	static const unsigned long set41[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140728157171712, 140737488351231,
 SNULL, 140728157175807, 140737488351231,
@@ -31185,7 +31186,7 @@ STORE, 94376135090176, 94376135094271,
 STORE, 94376135094272, 94376135098367,
 SNULL, 94376135094272, 94377208836095,
 };
-	unsigned long set42[] = {
+	static const unsigned long set42[] = {
 STORE, 314572800, 1388314623,
 STORE, 1462157312, 1462169599,
 STORE, 1462169600, 1462185983,
@@ -33862,7 +33863,7 @@ SNULL, 3798999040, 3799101439,
 */
 };
-	unsigned long set43[] = {
+	static const unsigned long set43[] = {
 STORE, 140737488347136, 140737488351231,
 STORE, 140734187720704, 140737488351231,
 SNULL, 140734187724800, 140737488351231,
@@ -34996,7 +34997,7 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
 	MT_BUG_ON(mt, !vals->seen_entry3);
 	MT_BUG_ON(mt, !vals->seen_both);
 }
-static noinline void check_rcu_simulated(struct maple_tree *mt)
+static noinline void __init check_rcu_simulated(struct maple_tree *mt)
 {
 	unsigned long i, nr_entries = 1000;
 	unsigned long target = 4320;
@@ -35157,7 +35158,7 @@ static noinline void check_rcu_simulated(struct maple_tree *mt)
 	rcu_unregister_thread();
 }
 
-static noinline void check_rcu_threaded(struct maple_tree *mt)
+static noinline void __init check_rcu_threaded(struct maple_tree *mt)
 {
 	unsigned long i, nr_entries = 1000;
 	struct rcu_test_struct vals;
@@ -35366,7 +35367,7 @@ static void check_dfs_preorder(struct maple_tree *mt)
 /* End of depth first search tests */
 
 /* Preallocation testing */
-static noinline void check_prealloc(struct maple_tree *mt)
+static noinline void __init check_prealloc(struct maple_tree *mt)
 {
 	unsigned long i, max = 100;
 	unsigned long allocated;
@@ -35494,7 +35495,7 @@ static noinline void check_prealloc(struct maple_tree *mt)
 /* End of preallocation testing */
 
 /* Spanning writes, writes that span nodes and layers of the tree */
-static noinline void check_spanning_write(struct maple_tree *mt)
+static noinline void __init check_spanning_write(struct maple_tree *mt)
 {
 	unsigned long i, max = 5000;
 	MA_STATE(mas, mt, 1200, 2380);
@@ -35662,7 +35663,7 @@ static noinline void check_spanning_write(struct maple_tree *mt)
 /* End of spanning write testing */
 
 /* Writes to a NULL area that are adjacent to other NULLs */
-static noinline void check_null_expand(struct maple_tree *mt)
+static noinline void __init check_null_expand(struct maple_tree *mt)
 {
 	unsigned long i, max = 100;
 	unsigned char data_end;
@@ -35723,7 +35724,7 @@ static noinline void check_null_expand(struct maple_tree *mt)
 /* End of NULL area expansions */
 
 /* Checking for no memory is best done outside the kernel */
-static noinline void check_nomem(struct maple_tree *mt)
+static noinline void __init check_nomem(struct maple_tree *mt)
 {
 	MA_STATE(ms, mt, 1, 1);
 
@@ -35758,7 +35759,7 @@ static noinline void check_nomem(struct maple_tree *mt)
 	mtree_destroy(mt);
 }
 
-static noinline void check_locky(struct maple_tree *mt)
+static noinline void __init check_locky(struct maple_tree *mt)
 {
 	MA_STATE(ms, mt, 2, 2);
 	MA_STATE(reader, mt, 2, 2);
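[Editor's note: the lib/test_maple_tree.c hunks above are mechanical but worth spelling out. Making the large setNN arrays static const moves them into read-only data instead of building them on the stack at each call, and tagging the check_* functions __init lets the kernel discard the test code after boot-time initialization. A minimal sketch of the same pattern follows; the names demo_set and demo_check are illustrative stand-ins, not part of the patch.]

	#include <linux/init.h>
	#include <linux/kernel.h>

	/* static const: the table lives in .rodata rather than being
	 * assembled on the (small) kernel stack every invocation. */
	static const unsigned long demo_set[] = { 1, 2, 3 };

	/* __init: this function is placed in the init section and its
	 * memory is freed once boot completes, which is fine for code
	 * that only runs during built-in self tests. */
	static noinline void __init demo_check(void)
	{
		unsigned long i;

		for (i = 0; i < ARRAY_SIZE(demo_set); i++)
			pr_info("entry %lu\n", demo_set[i]);
	}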
diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index 0ae8cafde439f..a40c35c90c52b 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -156,9 +156,7 @@ check_tools()
 	elif ! iptables -V &> /dev/null; then
 		echo "SKIP: Could not run all tests without iptables tool"
 		exit $ksft_skip
-	fi
-
-	if ! ip6tables -V &> /dev/null; then
+	elif ! ip6tables -V &> /dev/null; then
 		echo "SKIP: Could not run all tests without ip6tables tool"
 		exit $ksft_skip
 	fi
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 65f94f592ff88..1f40d6bc17bc3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4047,8 +4047,17 @@ static ssize_t kvm_vcpu_stats_read(struct file *file, char __user *user_buffer,
 			sizeof(vcpu->stat), user_buffer, size, offset);
 }
 
+static int kvm_vcpu_stats_release(struct inode *inode, struct file *file)
+{
+	struct kvm_vcpu *vcpu = file->private_data;
+
+	kvm_put_kvm(vcpu->kvm);
+	return 0;
+}
+
 static const struct file_operations kvm_vcpu_stats_fops = {
 	.read = kvm_vcpu_stats_read,
+	.release = kvm_vcpu_stats_release,
 	.llseek = noop_llseek,
 };
 
@@ -4069,6 +4078,9 @@ static int kvm_vcpu_ioctl_get_stats_fd(struct kvm_vcpu *vcpu)
 		put_unused_fd(fd);
 		return PTR_ERR(file);
 	}
+
+	kvm_get_kvm(vcpu->kvm);
+
 	file->f_mode |= FMODE_PREAD;
 	fd_install(fd, file);
 
@@ -4712,8 +4724,17 @@ static ssize_t kvm_vm_stats_read(struct file *file, char __user *user_buffer,
 			sizeof(kvm->stat), user_buffer, size, offset);
 }
 
+static int kvm_vm_stats_release(struct inode *inode, struct file *file)
+{
+	struct kvm *kvm = file->private_data;
+
+	kvm_put_kvm(kvm);
+	return 0;
+}
+
 static const struct file_operations kvm_vm_stats_fops = {
 	.read = kvm_vm_stats_read,
+	.release = kvm_vm_stats_release,
 	.llseek = noop_llseek,
 };
 
@@ -4732,6 +4753,9 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm)
 		put_unused_fd(fd);
 		return PTR_ERR(file);
 	}
+
+	kvm_get_kvm(kvm);
+
 	file->f_mode |= FMODE_PREAD;
 	fd_install(fd, file);
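[Editor's note: the virt/kvm/kvm_main.c hunks close a lifetime hole. The stats file descriptor keeps a pointer to the VM in file->private_data, but previously took no reference on it, so userspace could hold the fd open past VM destruction and read through freed memory. The fix pairs kvm_get_kvm() at fd creation with kvm_put_kvm() in the new ->release() hook. Below is a stripped-down userspace sketch of the same get-on-create / put-on-release pattern; struct vm, stats_open and friends are stand-in names, not the KVM implementation.]

	#include <stdio.h>
	#include <stdlib.h>

	struct vm {
		int refcount;
	};

	struct stats_file {
		struct vm *vm;	/* plays the role of file->private_data */
	};

	static void vm_get(struct vm *vm) { vm->refcount++; }

	static void vm_put(struct vm *vm)
	{
		if (--vm->refcount == 0) {
			printf("vm freed\n");
			free(vm);
		}
	}

	/* Analogue of kvm_vm_ioctl_get_stats_fd(): the handle handed to
	 * the caller must pin the VM it points at. */
	static struct stats_file *stats_open(struct vm *vm)
	{
		struct stats_file *f = malloc(sizeof(*f));

		f->vm = vm;
		vm_get(vm);	/* the fix: one reference per open file */
		return f;
	}

	/* Analogue of the new ->release() hook: drop that reference. */
	static void stats_release(struct stats_file *f)
	{
		vm_put(f->vm);
		free(f);
	}

	int main(void)
	{
		struct vm *vm = malloc(sizeof(*vm));
		struct stats_file *f;

		vm->refcount = 1;	/* creator's reference */
		f = stats_open(vm);	/* refcount -> 2 */
		vm_put(vm);		/* VM torn down, fd still open */
		/* f->vm remains valid here because of stats_open()'s
		 * reference; without it this would be a use-after-free. */
		stats_release(f);	/* last put actually frees the VM */
		return 0;
	}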