From fc8f5aded1cf9f5505c55694b36174621c7ac88c Mon Sep 17 00:00:00 2001
From: Thomas Petazzoni
Date: Mon, 12 Nov 2012 17:03:47 +0100
Subject: net: mvmdio: new Marvell MDIO driver

This patch adds a separate driver for the MDIO interface of the
Marvell Ethernet controllers. There are two reasons to have a separate
driver rather than including it inside the MAC driver itself:

 *) The MDIO interface is shared by all Ethernet ports, so a driver
    must guarantee non-concurrent accesses to this MDIO interface. The
    most logical way is to have a separate driver that handles this
    single MDIO interface, used by all Ethernet ports.

 *) The MDIO interface is the same between the existing mv643xx_eth
    driver and the new mvneta driver. Even though it is for now only
    used by the mvneta driver, it will in the future be used by the
    mv643xx_eth driver as well.

Signed-off-by: Thomas Petazzoni
Acked-by: David S. Miller
---
 .../devicetree/bindings/net/marvell-orion-mdio.txt | 35 ++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/marvell-orion-mdio.txt

diff --git a/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt
new file mode 100644
index 000000000000..34e7aafa321c
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/marvell-orion-mdio.txt
@@ -0,0 +1,35 @@
+* Marvell MDIO Ethernet Controller interface
+
+The Ethernet controllers of the Marvell Kirkwood, Dove, Orion5x,
+MV78xx0, Armada 370 and Armada XP have an identical unit that provides
+an interface with the MDIO bus. This driver handles this MDIO
+interface.
+
+Required properties:
+- compatible: "marvell,orion-mdio"
+- reg: address and length of the SMI register
+
+The child nodes of the MDIO driver are the individual PHY devices
+connected to this MDIO bus. They must have a "reg" property giving the
+PHY address on the MDIO bus.
+
+Example at the SoC level:
+
+mdio {
+	#address-cells = <1>;
+	#size-cells = <0>;
+	compatible = "marvell,orion-mdio";
+	reg = <0xd0072004 0x4>;
+};
+
+And at the board level:
+
+mdio {
+	phy0: ethernet-phy@0 {
+		reg = <0>;
+	};
+
+	phy1: ethernet-phy@1 {
+		reg = <1>;
+	};
+};
--
cgit v1.2.3


From c5aff18204da025fdf714f8f6423372b4b8efd00 Mon Sep 17 00:00:00 2001
From: Thomas Petazzoni
Date: Fri, 17 Aug 2012 14:04:28 +0300
Subject: net: mvneta: driver for Marvell Armada 370/XP network unit

This patch contains a new network driver for the network unit of the
ARM Marvell Armada 370 and the Armada XP. Both SoCs use the PJ4B
processor, a Marvell-developed ARM core that implements the ARMv7
instruction set.

Compared to previous ARM Marvell SoCs (Kirkwood, Orion, Discovery),
the network unit in Armada 370 and Armada XP is significantly
different. This is the reason why this new 'mvneta' driver is needed,
while the older ARM Marvell SoCs use the 'mv643xx_eth' driver.

Here is an overview of the most important hardware changes that
require a new, specific driver for the network unit of Armada 370/XP:

 - The new network unit has a completely different design and layout
   for the RX and TX descriptors. They are now organized as a simple
   array (each RX and TX queue has the base address and size of this
   array) rather than a linked list as in the old SoCs.

 - The new network unit has a different RXQ and TXQ management: this
   management is done using special read/write counter registers,
   while in the old SoCs, it was done using the Ownership bit in RX
   and TX descriptors.
 - The new network unit has different interrupt registers.

 - In the new network unit, interrupts are not acknowledged by writing
   to the cause register, but by updating per-queue counters.

 - The new network unit has different GMAC registers (link, speed,
   duplex configuration) and different WRR registers.

 - The new network unit has lots of new units like PnC (Parser and
   Classifier), PMT, BM (Memory Buffer Management), xPON, and more.

The driver proposed in the current patch only handles the basic
features. Additional hardware features will progressively be supported
as needed.

This code has originally been written by Rami Rosen, and then reviewed
and cleaned up by Thomas Petazzoni.

Signed-off-by: Thomas Petazzoni
Acked-by: David S. Miller
---
 .../bindings/net/marvell-armada-370-neta.txt       |   23 +
 drivers/net/ethernet/marvell/Kconfig               |   13 +
 drivers/net/ethernet/marvell/Makefile              |    1 +
 drivers/net/ethernet/marvell/mvneta.c              | 2839 ++++++++++++++++++++
 4 files changed, 2876 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
 create mode 100644 drivers/net/ethernet/marvell/mvneta.c

diff --git a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
new file mode 100644
index 000000000000..c4e87f0e450e
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
@@ -0,0 +1,23 @@
+* Marvell Armada 370 / Armada XP Ethernet Controller (NETA)
+
+Required properties:
+- compatible: should be "marvell,armada-370-neta".
+- reg: address and length of the register set for the device.
+- interrupts: interrupt for the device.
+- phy: A phandle to a phy node defining the PHY address (as the reg
+  property, a single integer).
+- phy-mode: The interface between the SoC and the PHY (a string that
+  of_get_phy_mode() can understand).
+- clock-frequency: frequency of the peripheral clock of the SoC.
+
+Example:
+
+ethernet@d0070000 {
+	compatible = "marvell,armada-370-neta";
+	reg = <0xd0070000 0x2500>;
+	interrupts = <8>;
+	clock-frequency = <250000000>;
+	status = "okay";
+	phy = <&phy0>;
+	phy-mode = "rgmii-id";
+};
diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index 232ccb3cb08b..edfba9370922 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -42,6 +42,19 @@ config MVMDIO
 	  (used on Armada 370 and XP), but it could be used in the
 	  future by the MV643XX_ETH driver.
 
+config MVNETA
+	tristate "Marvell Armada 370/XP network interface support"
+	depends on MACH_ARMADA_370_XP
+	select PHYLIB
+	select MVMDIO
+	---help---
+	  This driver supports the network interface units in the
+	  Marvell ARMADA XP and ARMADA 370 SoC family.
+
+	  Note that this driver is distinct from the mv643xx_eth
+	  driver, which should be used for the older Marvell SoCs
+	  (Dove, Orion, Discovery, Kirkwood).
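As a rough sketch of how the binding and the Kconfig entries above fit
together (the names example_connect_phy and example_link_event are
hypothetical, and error handling is trimmed): a MAC driver resolves the
"phy" phandle and "phy-mode" string with the OF helpers, then reaches
the PHY through the MDIO bus registered by the mvmdio driver that
MVNETA selects:

#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <linux/phy.h>

static void example_link_event(struct net_device *dev)
{
	/* adjust MAC speed/duplex settings on PHY link changes */
}

static int example_connect_phy(struct net_device *dev,
			       struct device_node *np)
{
	struct device_node *phy_node;
	struct phy_device *phy;
	int phy_mode;

	phy_node = of_parse_phandle(np, "phy", 0);
	phy_mode = of_get_phy_mode(np);	/* e.g. PHY_INTERFACE_MODE_RGMII_ID */
	if (!phy_node || phy_mode < 0)
		return -ENODEV;

	/* walks the MDIO bus that the shared mvmdio driver registered */
	phy = of_phy_connect(dev, phy_node, example_link_event, 0, phy_mode);
	return phy ? 0 : -ENODEV;
}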
+ config PXA168_ETH tristate "Marvell pxa168 ethernet support" depends on CPU_PXA168 diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile index 0438599fba47..7f63b4aac434 100644 --- a/drivers/net/ethernet/marvell/Makefile +++ b/drivers/net/ethernet/marvell/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_MV643XX_ETH) += mv643xx_eth.o obj-$(CONFIG_MVMDIO) += mvmdio.o +obj-$(CONFIG_MVNETA) += mvneta.o obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o obj-$(CONFIG_SKGE) += skge.o obj-$(CONFIG_SKY2) += sky2.o diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c new file mode 100644 index 000000000000..a7826f0a968e --- /dev/null +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -0,0 +1,2839 @@ +/* + * Driver for Marvell NETA network card for Armada XP and Armada 370 SoCs. + * + * Copyright (C) 2012 Marvell + * + * Rami Rosen + * Thomas Petazzoni + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Registers */ +#define MVNETA_RXQ_CONFIG_REG(q) (0x1400 + ((q) << 2)) +#define MVNETA_RXQ_HW_BUF_ALLOC BIT(1) +#define MVNETA_RXQ_PKT_OFFSET_ALL_MASK (0xf << 8) +#define MVNETA_RXQ_PKT_OFFSET_MASK(offs) ((offs) << 8) +#define MVNETA_RXQ_THRESHOLD_REG(q) (0x14c0 + ((q) << 2)) +#define MVNETA_RXQ_NON_OCCUPIED(v) ((v) << 16) +#define MVNETA_RXQ_BASE_ADDR_REG(q) (0x1480 + ((q) << 2)) +#define MVNETA_RXQ_SIZE_REG(q) (0x14a0 + ((q) << 2)) +#define MVNETA_RXQ_BUF_SIZE_SHIFT 19 +#define MVNETA_RXQ_BUF_SIZE_MASK (0x1fff << 19) +#define MVNETA_RXQ_STATUS_REG(q) (0x14e0 + ((q) << 2)) +#define MVNETA_RXQ_OCCUPIED_ALL_MASK 0x3fff +#define MVNETA_RXQ_STATUS_UPDATE_REG(q) (0x1500 + ((q) << 2)) +#define MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT 16 +#define MVNETA_RXQ_ADD_NON_OCCUPIED_MAX 255 +#define MVNETA_PORT_RX_RESET 0x1cc0 +#define MVNETA_PORT_RX_DMA_RESET BIT(0) +#define MVNETA_PHY_ADDR 0x2000 +#define MVNETA_PHY_ADDR_MASK 0x1f +#define MVNETA_MBUS_RETRY 0x2010 +#define MVNETA_UNIT_INTR_CAUSE 0x2080 +#define MVNETA_UNIT_CONTROL 0x20B0 +#define MVNETA_PHY_POLLING_ENABLE BIT(1) +#define MVNETA_WIN_BASE(w) (0x2200 + ((w) << 3)) +#define MVNETA_WIN_SIZE(w) (0x2204 + ((w) << 3)) +#define MVNETA_WIN_REMAP(w) (0x2280 + ((w) << 2)) +#define MVNETA_BASE_ADDR_ENABLE 0x2290 +#define MVNETA_PORT_CONFIG 0x2400 +#define MVNETA_UNI_PROMISC_MODE BIT(0) +#define MVNETA_DEF_RXQ(q) ((q) << 1) +#define MVNETA_DEF_RXQ_ARP(q) ((q) << 4) +#define MVNETA_TX_UNSET_ERR_SUM BIT(12) +#define MVNETA_DEF_RXQ_TCP(q) ((q) << 16) +#define MVNETA_DEF_RXQ_UDP(q) ((q) << 19) +#define MVNETA_DEF_RXQ_BPDU(q) ((q) << 22) +#define MVNETA_RX_CSUM_WITH_PSEUDO_HDR BIT(25) +#define MVNETA_PORT_CONFIG_DEFL_VALUE(q) (MVNETA_DEF_RXQ(q) | \ + MVNETA_DEF_RXQ_ARP(q) | \ + MVNETA_DEF_RXQ_TCP(q) | \ + MVNETA_DEF_RXQ_UDP(q) | \ + MVNETA_DEF_RXQ_BPDU(q) | \ + MVNETA_TX_UNSET_ERR_SUM | \ + MVNETA_RX_CSUM_WITH_PSEUDO_HDR) +#define MVNETA_PORT_CONFIG_EXTEND 0x2404 +#define MVNETA_MAC_ADDR_LOW 0x2414 +#define MVNETA_MAC_ADDR_HIGH 0x2418 +#define MVNETA_SDMA_CONFIG 0x241c +#define MVNETA_SDMA_BRST_SIZE_16 4 +#define MVNETA_NO_DESC_SWAP 0x0 +#define MVNETA_RX_BRST_SZ_MASK(burst) ((burst) << 1) +#define MVNETA_RX_NO_DATA_SWAP BIT(4) +#define MVNETA_TX_NO_DATA_SWAP BIT(5) +#define 
MVNETA_TX_BRST_SZ_MASK(burst) ((burst) << 22) +#define MVNETA_PORT_STATUS 0x2444 +#define MVNETA_TX_IN_PRGRS BIT(1) +#define MVNETA_TX_FIFO_EMPTY BIT(8) +#define MVNETA_RX_MIN_FRAME_SIZE 0x247c +#define MVNETA_TYPE_PRIO 0x24bc +#define MVNETA_FORCE_UNI BIT(21) +#define MVNETA_TXQ_CMD_1 0x24e4 +#define MVNETA_TXQ_CMD 0x2448 +#define MVNETA_TXQ_DISABLE_SHIFT 8 +#define MVNETA_TXQ_ENABLE_MASK 0x000000ff +#define MVNETA_ACC_MODE 0x2500 +#define MVNETA_CPU_MAP(cpu) (0x2540 + ((cpu) << 2)) +#define MVNETA_CPU_RXQ_ACCESS_ALL_MASK 0x000000ff +#define MVNETA_CPU_TXQ_ACCESS_ALL_MASK 0x0000ff00 +#define MVNETA_RXQ_TIME_COAL_REG(q) (0x2580 + ((q) << 2)) +#define MVNETA_INTR_NEW_CAUSE 0x25a0 +#define MVNETA_RX_INTR_MASK(nr_rxqs) (((1 << nr_rxqs) - 1) << 8) +#define MVNETA_INTR_NEW_MASK 0x25a4 +#define MVNETA_INTR_OLD_CAUSE 0x25a8 +#define MVNETA_INTR_OLD_MASK 0x25ac +#define MVNETA_INTR_MISC_CAUSE 0x25b0 +#define MVNETA_INTR_MISC_MASK 0x25b4 +#define MVNETA_INTR_ENABLE 0x25b8 +#define MVNETA_TXQ_INTR_ENABLE_ALL_MASK 0x0000ff00 +#define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 +#define MVNETA_RXQ_CMD 0x2680 +#define MVNETA_RXQ_DISABLE_SHIFT 8 +#define MVNETA_RXQ_ENABLE_MASK 0x000000ff +#define MVETH_TXQ_TOKEN_COUNT_REG(q) (0x2700 + ((q) << 4)) +#define MVETH_TXQ_TOKEN_CFG_REG(q) (0x2704 + ((q) << 4)) +#define MVNETA_GMAC_CTRL_0 0x2c00 +#define MVNETA_GMAC_MAX_RX_SIZE_SHIFT 2 +#define MVNETA_GMAC_MAX_RX_SIZE_MASK 0x7ffc +#define MVNETA_GMAC0_PORT_ENABLE BIT(0) +#define MVNETA_GMAC_CTRL_2 0x2c08 +#define MVNETA_GMAC2_PSC_ENABLE BIT(3) +#define MVNETA_GMAC2_PORT_RGMII BIT(4) +#define MVNETA_GMAC2_PORT_RESET BIT(6) +#define MVNETA_GMAC_STATUS 0x2c10 +#define MVNETA_GMAC_LINK_UP BIT(0) +#define MVNETA_GMAC_SPEED_1000 BIT(1) +#define MVNETA_GMAC_SPEED_100 BIT(2) +#define MVNETA_GMAC_FULL_DUPLEX BIT(3) +#define MVNETA_GMAC_RX_FLOW_CTRL_ENABLE BIT(4) +#define MVNETA_GMAC_TX_FLOW_CTRL_ENABLE BIT(5) +#define MVNETA_GMAC_RX_FLOW_CTRL_ACTIVE BIT(6) +#define MVNETA_GMAC_TX_FLOW_CTRL_ACTIVE BIT(7) +#define MVNETA_GMAC_AUTONEG_CONFIG 0x2c0c +#define MVNETA_GMAC_FORCE_LINK_DOWN BIT(0) +#define MVNETA_GMAC_FORCE_LINK_PASS BIT(1) +#define MVNETA_GMAC_CONFIG_MII_SPEED BIT(5) +#define MVNETA_GMAC_CONFIG_GMII_SPEED BIT(6) +#define MVNETA_GMAC_CONFIG_FULL_DUPLEX BIT(12) +#define MVNETA_MIB_COUNTERS_BASE 0x3080 +#define MVNETA_MIB_LATE_COLLISION 0x7c +#define MVNETA_DA_FILT_SPEC_MCAST 0x3400 +#define MVNETA_DA_FILT_OTH_MCAST 0x3500 +#define MVNETA_DA_FILT_UCAST_BASE 0x3600 +#define MVNETA_TXQ_BASE_ADDR_REG(q) (0x3c00 + ((q) << 2)) +#define MVNETA_TXQ_SIZE_REG(q) (0x3c20 + ((q) << 2)) +#define MVNETA_TXQ_SENT_THRESH_ALL_MASK 0x3fff0000 +#define MVNETA_TXQ_SENT_THRESH_MASK(coal) ((coal) << 16) +#define MVNETA_TXQ_UPDATE_REG(q) (0x3c60 + ((q) << 2)) +#define MVNETA_TXQ_DEC_SENT_SHIFT 16 +#define MVNETA_TXQ_STATUS_REG(q) (0x3c40 + ((q) << 2)) +#define MVNETA_TXQ_SENT_DESC_SHIFT 16 +#define MVNETA_TXQ_SENT_DESC_MASK 0x3fff0000 +#define MVNETA_PORT_TX_RESET 0x3cf0 +#define MVNETA_PORT_TX_DMA_RESET BIT(0) +#define MVNETA_TX_MTU 0x3e0c +#define MVNETA_TX_TOKEN_SIZE 0x3e14 +#define MVNETA_TX_TOKEN_SIZE_MAX 0xffffffff +#define MVNETA_TXQ_TOKEN_SIZE_REG(q) (0x3e40 + ((q) << 2)) +#define MVNETA_TXQ_TOKEN_SIZE_MAX 0x7fffffff + +#define MVNETA_CAUSE_TXQ_SENT_DESC_ALL_MASK 0xff + +/* Descriptor ring Macros */ +#define MVNETA_QUEUE_NEXT_DESC(q, index) \ + (((index) < (q)->last_desc) ? 
((index) + 1) : 0) + +/* Various constants */ + +/* Coalescing */ +#define MVNETA_TXDONE_COAL_PKTS 16 +#define MVNETA_RX_COAL_PKTS 32 +#define MVNETA_RX_COAL_USEC 100 + +/* Timer */ +#define MVNETA_TX_DONE_TIMER_PERIOD 10 + +/* Napi polling weight */ +#define MVNETA_RX_POLL_WEIGHT 64 + +/* + * The two bytes Marvell header. Either contains a special value used + * by Marvell switches when a specific hardware mode is enabled (not + * supported by this driver) or is filled automatically by zeroes on + * the RX side. Those two bytes being at the front of the Ethernet + * header, they allow to have the IP header aligned on a 4 bytes + * boundary automatically: the hardware skips those two bytes on its + * own. + */ +#define MVNETA_MH_SIZE 2 + +#define MVNETA_VLAN_TAG_LEN 4 + +#define MVNETA_CPU_D_CACHE_LINE_SIZE 32 +#define MVNETA_TX_CSUM_MAX_SIZE 9800 +#define MVNETA_ACC_MODE_EXT 1 + +/* Timeout constants */ +#define MVNETA_TX_DISABLE_TIMEOUT_MSEC 1000 +#define MVNETA_RX_DISABLE_TIMEOUT_MSEC 1000 +#define MVNETA_TX_FIFO_EMPTY_TIMEOUT 10000 + +#define MVNETA_TX_MTU_MAX 0x3ffff + +/* Max number of Rx descriptors */ +#define MVNETA_MAX_RXD 128 + +/* Max number of Tx descriptors */ +#define MVNETA_MAX_TXD 532 + +/* descriptor aligned size */ +#define MVNETA_DESC_ALIGNED_SIZE 32 + +#define MVNETA_RX_PKT_SIZE(mtu) \ + ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \ + ETH_HLEN + ETH_FCS_LEN, \ + MVNETA_CPU_D_CACHE_LINE_SIZE) + +#define MVNETA_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD) + +struct mvneta_stats { + struct u64_stats_sync syncp; + u64 packets; + u64 bytes; +}; + +struct mvneta_port { + int pkt_size; + void __iomem *base; + struct mvneta_rx_queue *rxqs; + struct mvneta_tx_queue *txqs; + struct timer_list tx_done_timer; + struct net_device *dev; + + u32 cause_rx_tx; + struct napi_struct napi; + + /* Flags */ + unsigned long flags; +#define MVNETA_F_TX_DONE_TIMER_BIT 0 + + /* Napi weight */ + int weight; + + /* Core clock */ + unsigned int clk_rate_hz; + u8 mcast_count[256]; + u16 tx_ring_size; + u16 rx_ring_size; + struct mvneta_stats tx_stats; + struct mvneta_stats rx_stats; + + struct mii_bus *mii_bus; + struct phy_device *phy_dev; + phy_interface_t phy_interface; + struct device_node *phy_node; + unsigned int link; + unsigned int duplex; + unsigned int speed; +}; + +/* + * The mvneta_tx_desc and mvneta_rx_desc structures describe the + * layout of the transmit and reception DMA descriptors, and their + * layout is therefore defined by the hardware design + */ +struct mvneta_tx_desc { + u32 command; /* Options used by HW for packet transmitting.*/ +#define MVNETA_TX_L3_OFF_SHIFT 0 +#define MVNETA_TX_IP_HLEN_SHIFT 8 +#define MVNETA_TX_L4_UDP BIT(16) +#define MVNETA_TX_L3_IP6 BIT(17) +#define MVNETA_TXD_IP_CSUM BIT(18) +#define MVNETA_TXD_Z_PAD BIT(19) +#define MVNETA_TXD_L_DESC BIT(20) +#define MVNETA_TXD_F_DESC BIT(21) +#define MVNETA_TXD_FLZ_DESC (MVNETA_TXD_Z_PAD | \ + MVNETA_TXD_L_DESC | \ + MVNETA_TXD_F_DESC) +#define MVNETA_TX_L4_CSUM_FULL BIT(30) +#define MVNETA_TX_L4_CSUM_NOT BIT(31) + + u16 reserverd1; /* csum_l4 (for future use) */ + u16 data_size; /* Data size of transmitted packet in bytes */ + u32 buf_phys_addr; /* Physical addr of transmitted buffer */ + u32 reserved2; /* hw_cmd - (for future use, PMT) */ + u32 reserved3[4]; /* Reserved - (for future use) */ +}; + +struct mvneta_rx_desc { + u32 status; /* Info about received packet */ +#define MVNETA_RXD_ERR_CRC 0x0 +#define MVNETA_RXD_ERR_SUMMARY BIT(16) +#define MVNETA_RXD_ERR_OVERRUN BIT(17) +#define 
MVNETA_RXD_ERR_LEN BIT(18) +#define MVNETA_RXD_ERR_RESOURCE (BIT(17) | BIT(18)) +#define MVNETA_RXD_ERR_CODE_MASK (BIT(17) | BIT(18)) +#define MVNETA_RXD_L3_IP4 BIT(25) +#define MVNETA_RXD_FIRST_LAST_DESC (BIT(26) | BIT(27)) +#define MVNETA_RXD_L4_CSUM_OK BIT(30) + + u16 reserved1; /* pnc_info - (for future use, PnC) */ + u16 data_size; /* Size of received packet in bytes */ + u32 buf_phys_addr; /* Physical address of the buffer */ + u32 reserved2; /* pnc_flow_id (for future use, PnC) */ + u32 buf_cookie; /* cookie for access to RX buffer in rx path */ + u16 reserved3; /* prefetch_cmd, for future use */ + u16 reserved4; /* csum_l4 - (for future use, PnC) */ + u32 reserved5; /* pnc_extra PnC (for future use, PnC) */ + u32 reserved6; /* hw_cmd (for future use, PnC and HWF) */ +}; + +struct mvneta_tx_queue { + /* Number of this TX queue, in the range 0-7 */ + u8 id; + + /* Number of TX DMA descriptors in the descriptor ring */ + int size; + + /* Number of currently used TX DMA descriptor in the + * descriptor ring */ + int count; + + /* Array of transmitted skb */ + struct sk_buff **tx_skb; + + /* Index of last TX DMA descriptor that was inserted */ + int txq_put_index; + + /* Index of the TX DMA descriptor to be cleaned up */ + int txq_get_index; + + u32 done_pkts_coal; + + /* Virtual address of the TX DMA descriptors array */ + struct mvneta_tx_desc *descs; + + /* DMA address of the TX DMA descriptors array */ + dma_addr_t descs_phys; + + /* Index of the last TX DMA descriptor */ + int last_desc; + + /* Index of the next TX DMA descriptor to process */ + int next_desc_to_proc; +}; + +struct mvneta_rx_queue { + /* rx queue number, in the range 0-7 */ + u8 id; + + /* num of rx descriptors in the rx descriptor ring */ + int size; + + /* counter of times when mvneta_refill() failed */ + int missed; + + u32 pkts_coal; + u32 time_coal; + + /* Virtual address of the RX DMA descriptors array */ + struct mvneta_rx_desc *descs; + + /* DMA address of the RX DMA descriptors array */ + dma_addr_t descs_phys; + + /* Index of the last RX DMA descriptor */ + int last_desc; + + /* Index of the next RX DMA descriptor to process */ + int next_desc_to_proc; +}; + +static int rxq_number = 8; +static int txq_number = 8; + +static int rxq_def; +static int txq_def; + +#define MVNETA_DRIVER_NAME "mvneta" +#define MVNETA_DRIVER_VERSION "1.0" + +/* Utility/helper methods */ + +/* Write helper method */ +static void mvreg_write(struct mvneta_port *pp, u32 offset, u32 data) +{ + writel(data, pp->base + offset); +} + +/* Read helper method */ +static u32 mvreg_read(struct mvneta_port *pp, u32 offset) +{ + return readl(pp->base + offset); +} + +/* Increment txq get counter */ +static void mvneta_txq_inc_get(struct mvneta_tx_queue *txq) +{ + txq->txq_get_index++; + if (txq->txq_get_index == txq->size) + txq->txq_get_index = 0; +} + +/* Increment txq put counter */ +static void mvneta_txq_inc_put(struct mvneta_tx_queue *txq) +{ + txq->txq_put_index++; + if (txq->txq_put_index == txq->size) + txq->txq_put_index = 0; +} + + +/* Clear all MIB counters */ +static void mvneta_mib_counters_clear(struct mvneta_port *pp) +{ + int i; + u32 dummy; + + /* Perform dummy reads from MIB counters */ + for (i = 0; i < MVNETA_MIB_LATE_COLLISION; i += 4) + dummy = mvreg_read(pp, (MVNETA_MIB_COUNTERS_BASE + i)); +} + +/* Get System Network Statistics */ +struct rtnl_link_stats64 *mvneta_get_stats64(struct net_device *dev, + struct rtnl_link_stats64 *stats) +{ + struct mvneta_port *pp = netdev_priv(dev); + unsigned int start; + + 
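+	/* The fetch_begin/fetch_retry pair below re-reads the counters if
+	 * a writer updated them concurrently, keeping the 64-bit values
+	 * consistent even on 32-bit CPUs. */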
memset(stats, 0, sizeof(struct rtnl_link_stats64)); + + do { + start = u64_stats_fetch_begin_bh(&pp->rx_stats.syncp); + stats->rx_packets = pp->rx_stats.packets; + stats->rx_bytes = pp->rx_stats.bytes; + } while (u64_stats_fetch_retry_bh(&pp->rx_stats.syncp, start)); + + + do { + start = u64_stats_fetch_begin_bh(&pp->tx_stats.syncp); + stats->tx_packets = pp->tx_stats.packets; + stats->tx_bytes = pp->tx_stats.bytes; + } while (u64_stats_fetch_retry_bh(&pp->tx_stats.syncp, start)); + + stats->rx_errors = dev->stats.rx_errors; + stats->rx_dropped = dev->stats.rx_dropped; + + stats->tx_dropped = dev->stats.tx_dropped; + + return stats; +} + +/* Rx descriptors helper methods */ + +/* + * Checks whether the given RX descriptor is both the first and the + * last descriptor for the RX packet. Each RX packet is currently + * received through a single RX descriptor, so not having each RX + * descriptor with its first and last bits set is an error + */ +static int mvneta_rxq_desc_is_first_last(struct mvneta_rx_desc *desc) +{ + return (desc->status & MVNETA_RXD_FIRST_LAST_DESC) == + MVNETA_RXD_FIRST_LAST_DESC; +} + +/* Add number of descriptors ready to receive new packets */ +static void mvneta_rxq_non_occup_desc_add(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, + int ndescs) +{ + /* Only MVNETA_RXQ_ADD_NON_OCCUPIED_MAX (255) descriptors can + * be added at once */ + while (ndescs > MVNETA_RXQ_ADD_NON_OCCUPIED_MAX) { + mvreg_write(pp, MVNETA_RXQ_STATUS_UPDATE_REG(rxq->id), + (MVNETA_RXQ_ADD_NON_OCCUPIED_MAX << + MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT)); + ndescs -= MVNETA_RXQ_ADD_NON_OCCUPIED_MAX; + } + + mvreg_write(pp, MVNETA_RXQ_STATUS_UPDATE_REG(rxq->id), + (ndescs << MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT)); +} + +/* Get number of RX descriptors occupied by received packets */ +static int mvneta_rxq_busy_desc_num_get(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_RXQ_STATUS_REG(rxq->id)); + return val & MVNETA_RXQ_OCCUPIED_ALL_MASK; +} + +/* + * Update num of rx desc called upon return from rx path or + * from mvneta_rxq_drop_pkts(). + */ +static void mvneta_rxq_desc_num_update(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, + int rx_done, int rx_filled) +{ + u32 val; + + if ((rx_done <= 0xff) && (rx_filled <= 0xff)) { + val = rx_done | + (rx_filled << MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT); + mvreg_write(pp, MVNETA_RXQ_STATUS_UPDATE_REG(rxq->id), val); + return; + } + + /* Only 255 descriptors can be added at once */ + while ((rx_done > 0) || (rx_filled > 0)) { + if (rx_done <= 0xff) { + val = rx_done; + rx_done = 0; + } else { + val = 0xff; + rx_done -= 0xff; + } + if (rx_filled <= 0xff) { + val |= rx_filled << MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT; + rx_filled = 0; + } else { + val |= 0xff << MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT; + rx_filled -= 0xff; + } + mvreg_write(pp, MVNETA_RXQ_STATUS_UPDATE_REG(rxq->id), val); + } +} + +/* Get pointer to next RX descriptor to be processed by SW */ +static struct mvneta_rx_desc * +mvneta_rxq_next_desc_get(struct mvneta_rx_queue *rxq) +{ + int rx_desc = rxq->next_desc_to_proc; + + rxq->next_desc_to_proc = MVNETA_QUEUE_NEXT_DESC(rxq, rx_desc); + return rxq->descs + rx_desc; +} + +/* Change maximum receive size of the port. 
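+ * Judging from the code below, the hardware field is expressed in
+ * 2-byte units and excludes the 2-byte Marvell header: the value
+ * written is (max_rx_size - MVNETA_MH_SIZE) / 2, so for instance
+ * 1518 becomes (1518 - 2) / 2 = 758.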
*/ +static void mvneta_max_rx_size_set(struct mvneta_port *pp, int max_rx_size) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_GMAC_CTRL_0); + val &= ~MVNETA_GMAC_MAX_RX_SIZE_MASK; + val |= ((max_rx_size - MVNETA_MH_SIZE) / 2) << + MVNETA_GMAC_MAX_RX_SIZE_SHIFT; + mvreg_write(pp, MVNETA_GMAC_CTRL_0, val); +} + + +/* Set rx queue offset */ +static void mvneta_rxq_offset_set(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, + int offset) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_RXQ_CONFIG_REG(rxq->id)); + val &= ~MVNETA_RXQ_PKT_OFFSET_ALL_MASK; + + /* Offset is in */ + val |= MVNETA_RXQ_PKT_OFFSET_MASK(offset >> 3); + mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val); +} + + +/* Tx descriptors helper methods */ + +/* Update HW with number of TX descriptors to be sent */ +static void mvneta_txq_pend_desc_add(struct mvneta_port *pp, + struct mvneta_tx_queue *txq, + int pend_desc) +{ + u32 val; + + /* Only 255 descriptors can be added at once ; Assume caller + process TX desriptors in quanta less than 256 */ + val = pend_desc; + mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val); +} + +/* Get pointer to next TX descriptor to be processed (send) by HW */ +static struct mvneta_tx_desc * +mvneta_txq_next_desc_get(struct mvneta_tx_queue *txq) +{ + int tx_desc = txq->next_desc_to_proc; + + txq->next_desc_to_proc = MVNETA_QUEUE_NEXT_DESC(txq, tx_desc); + return txq->descs + tx_desc; +} + +/* Release the last allocated TX descriptor. Useful to handle DMA + * mapping failures in the TX path. */ +static void mvneta_txq_desc_put(struct mvneta_tx_queue *txq) +{ + if (txq->next_desc_to_proc == 0) + txq->next_desc_to_proc = txq->last_desc - 1; + else + txq->next_desc_to_proc--; +} + +/* Set rxq buf size */ +static void mvneta_rxq_buf_size_set(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, + int buf_size) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_RXQ_SIZE_REG(rxq->id)); + + val &= ~MVNETA_RXQ_BUF_SIZE_MASK; + val |= ((buf_size >> 3) << MVNETA_RXQ_BUF_SIZE_SHIFT); + + mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), val); +} + +/* Disable buffer management (BM) */ +static void mvneta_rxq_bm_disable(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_RXQ_CONFIG_REG(rxq->id)); + val &= ~MVNETA_RXQ_HW_BUF_ALLOC; + mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val); +} + + + +/* Sets the RGMII Enable bit (RGMIIEn) in port MAC control register */ +static void __devinit mvneta_gmac_rgmii_set(struct mvneta_port *pp, int enable) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); + + if (enable) + val |= MVNETA_GMAC2_PORT_RGMII; + else + val &= ~MVNETA_GMAC2_PORT_RGMII; + + mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); +} + +/* Config SGMII port */ +static void __devinit mvneta_port_sgmii_config(struct mvneta_port *pp) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); + val |= MVNETA_GMAC2_PSC_ENABLE; + mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); +} + +/* Start the Ethernet port RX and TX activity */ +static void mvneta_port_up(struct mvneta_port *pp) +{ + int queue; + u32 q_map; + + /* Enable all initialized TXs. */ + mvneta_mib_counters_clear(pp); + q_map = 0; + for (queue = 0; queue < txq_number; queue++) { + struct mvneta_tx_queue *txq = &pp->txqs[queue]; + if (txq->descs != NULL) + q_map |= (1 << queue); + } + mvreg_write(pp, MVNETA_TXQ_CMD, q_map); + + /* Enable all initialized RXQs. 
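+	 * Each queue whose descriptor array is allocated contributes bit
+	 * (1 << queue) to q_map; with RX queues 0 and 1 ready, for
+	 * example, q_map = 0x3 is written to MVNETA_RXQ_CMD.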
*/ + q_map = 0; + for (queue = 0; queue < rxq_number; queue++) { + struct mvneta_rx_queue *rxq = &pp->rxqs[queue]; + if (rxq->descs != NULL) + q_map |= (1 << queue); + } + + mvreg_write(pp, MVNETA_RXQ_CMD, q_map); +} + +/* Stop the Ethernet port activity */ +static void mvneta_port_down(struct mvneta_port *pp) +{ + u32 val; + int count; + + /* Stop Rx port activity. Check port Rx activity. */ + val = mvreg_read(pp, MVNETA_RXQ_CMD) & MVNETA_RXQ_ENABLE_MASK; + + /* Issue stop command for active channels only */ + if (val != 0) + mvreg_write(pp, MVNETA_RXQ_CMD, + val << MVNETA_RXQ_DISABLE_SHIFT); + + /* Wait for all Rx activity to terminate. */ + count = 0; + do { + if (count++ >= MVNETA_RX_DISABLE_TIMEOUT_MSEC) { + netdev_warn(pp->dev, + "TIMEOUT for RX stopped ! rx_queue_cmd: 0x08%x\n", + val); + break; + } + mdelay(1); + + val = mvreg_read(pp, MVNETA_RXQ_CMD); + } while (val & 0xff); + + /* Stop Tx port activity. Check port Tx activity. Issue stop + command for active channels only */ + val = (mvreg_read(pp, MVNETA_TXQ_CMD)) & MVNETA_TXQ_ENABLE_MASK; + + if (val != 0) + mvreg_write(pp, MVNETA_TXQ_CMD, + (val << MVNETA_TXQ_DISABLE_SHIFT)); + + /* Wait for all Tx activity to terminate. */ + count = 0; + do { + if (count++ >= MVNETA_TX_DISABLE_TIMEOUT_MSEC) { + netdev_warn(pp->dev, + "TIMEOUT for TX stopped status=0x%08x\n", + val); + break; + } + mdelay(1); + + /* Check TX Command reg that all Txqs are stopped */ + val = mvreg_read(pp, MVNETA_TXQ_CMD); + + } while (val & 0xff); + + /* Double check to verify that TX FIFO is empty */ + count = 0; + do { + if (count++ >= MVNETA_TX_FIFO_EMPTY_TIMEOUT) { + netdev_warn(pp->dev, + "TX FIFO empty timeout status=0x08%x\n", + val); + break; + } + mdelay(1); + + val = mvreg_read(pp, MVNETA_PORT_STATUS); + } while (!(val & MVNETA_TX_FIFO_EMPTY) && + (val & MVNETA_TX_IN_PRGRS)); + + udelay(200); +} + +/* Enable the port by setting the port enable bit of the MAC control register */ +static void mvneta_port_enable(struct mvneta_port *pp) +{ + u32 val; + + /* Enable port */ + val = mvreg_read(pp, MVNETA_GMAC_CTRL_0); + val |= MVNETA_GMAC0_PORT_ENABLE; + mvreg_write(pp, MVNETA_GMAC_CTRL_0, val); +} + +/* Disable the port and wait for about 200 usec before retuning */ +static void mvneta_port_disable(struct mvneta_port *pp) +{ + u32 val; + + /* Reset the Enable bit in the Serial Control Register */ + val = mvreg_read(pp, MVNETA_GMAC_CTRL_0); + val &= ~MVNETA_GMAC0_PORT_ENABLE; + mvreg_write(pp, MVNETA_GMAC_CTRL_0, val); + + udelay(200); +} + +/* Multicast tables methods */ + +/* Set all entries in Unicast MAC Table; queue==-1 means reject all */ +static void mvneta_set_ucast_table(struct mvneta_port *pp, int queue) +{ + int offset; + u32 val; + + if (queue == -1) { + val = 0; + } else { + val = 0x1 | (queue << 1); + val |= (val << 24) | (val << 16) | (val << 8); + } + + for (offset = 0; offset <= 0xc; offset += 4) + mvreg_write(pp, MVNETA_DA_FILT_UCAST_BASE + offset, val); +} + +/* Set all entries in Special Multicast MAC Table; queue==-1 means reject all */ +static void mvneta_set_special_mcast_table(struct mvneta_port *pp, int queue) +{ + int offset; + u32 val; + + if (queue == -1) { + val = 0; + } else { + val = 0x1 | (queue << 1); + val |= (val << 24) | (val << 16) | (val << 8); + } + + for (offset = 0; offset <= 0xfc; offset += 4) + mvreg_write(pp, MVNETA_DA_FILT_SPEC_MCAST + offset, val); + +} + +/* Set all entries in Other Multicast MAC Table. 
queue==-1 means reject all */ +static void mvneta_set_other_mcast_table(struct mvneta_port *pp, int queue) +{ + int offset; + u32 val; + + if (queue == -1) { + memset(pp->mcast_count, 0, sizeof(pp->mcast_count)); + val = 0; + } else { + memset(pp->mcast_count, 1, sizeof(pp->mcast_count)); + val = 0x1 | (queue << 1); + val |= (val << 24) | (val << 16) | (val << 8); + } + + for (offset = 0; offset <= 0xfc; offset += 4) + mvreg_write(pp, MVNETA_DA_FILT_OTH_MCAST + offset, val); +} + +/* This method sets defaults to the NETA port: + * Clears interrupt Cause and Mask registers. + * Clears all MAC tables. + * Sets defaults to all registers. + * Resets RX and TX descriptor rings. + * Resets PHY. + * This method can be called after mvneta_port_down() to return the port + * settings to defaults. + */ +static void mvneta_defaults_set(struct mvneta_port *pp) +{ + int cpu; + int queue; + u32 val; + + /* Clear all Cause registers */ + mvreg_write(pp, MVNETA_INTR_NEW_CAUSE, 0); + mvreg_write(pp, MVNETA_INTR_OLD_CAUSE, 0); + mvreg_write(pp, MVNETA_INTR_MISC_CAUSE, 0); + + /* Mask all interrupts */ + mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0); + mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0); + mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0); + mvreg_write(pp, MVNETA_INTR_ENABLE, 0); + + /* Enable MBUS Retry bit16 */ + mvreg_write(pp, MVNETA_MBUS_RETRY, 0x20); + + /* Set CPU queue access map - all CPUs have access to all RX + queues and to all TX queues */ + for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) + mvreg_write(pp, MVNETA_CPU_MAP(cpu), + (MVNETA_CPU_RXQ_ACCESS_ALL_MASK | + MVNETA_CPU_TXQ_ACCESS_ALL_MASK)); + + /* Reset RX and TX DMAs */ + mvreg_write(pp, MVNETA_PORT_RX_RESET, MVNETA_PORT_RX_DMA_RESET); + mvreg_write(pp, MVNETA_PORT_TX_RESET, MVNETA_PORT_TX_DMA_RESET); + + /* Disable Legacy WRR, Disable EJP, Release from reset */ + mvreg_write(pp, MVNETA_TXQ_CMD_1, 0); + for (queue = 0; queue < txq_number; queue++) { + mvreg_write(pp, MVETH_TXQ_TOKEN_COUNT_REG(queue), 0); + mvreg_write(pp, MVETH_TXQ_TOKEN_CFG_REG(queue), 0); + } + + mvreg_write(pp, MVNETA_PORT_TX_RESET, 0); + mvreg_write(pp, MVNETA_PORT_RX_RESET, 0); + + /* Set Port Acceleration Mode */ + val = MVNETA_ACC_MODE_EXT; + mvreg_write(pp, MVNETA_ACC_MODE, val); + + /* Update val of portCfg register accordingly with all RxQueue types */ + val = MVNETA_PORT_CONFIG_DEFL_VALUE(rxq_def); + mvreg_write(pp, MVNETA_PORT_CONFIG, val); + + val = 0; + mvreg_write(pp, MVNETA_PORT_CONFIG_EXTEND, val); + mvreg_write(pp, MVNETA_RX_MIN_FRAME_SIZE, 64); + + /* Build PORT_SDMA_CONFIG_REG */ + val = 0; + + /* Default burst size */ + val |= MVNETA_TX_BRST_SZ_MASK(MVNETA_SDMA_BRST_SIZE_16); + val |= MVNETA_RX_BRST_SZ_MASK(MVNETA_SDMA_BRST_SIZE_16); + + val |= (MVNETA_RX_NO_DATA_SWAP | MVNETA_TX_NO_DATA_SWAP | + MVNETA_NO_DESC_SWAP); + + /* Assign port SDMA configuration */ + mvreg_write(pp, MVNETA_SDMA_CONFIG, val); + + mvneta_set_ucast_table(pp, -1); + mvneta_set_special_mcast_table(pp, -1); + mvneta_set_other_mcast_table(pp, -1); + + /* Set port interrupt enable register - default enable all */ + mvreg_write(pp, MVNETA_INTR_ENABLE, + (MVNETA_RXQ_INTR_ENABLE_ALL_MASK + | MVNETA_TXQ_INTR_ENABLE_ALL_MASK)); +} + +/* Set max sizes for tx queues */ +static void mvneta_txq_max_tx_size_set(struct mvneta_port *pp, int max_tx_size) + +{ + u32 val, size, mtu; + int queue; + + mtu = max_tx_size * 8; + if (mtu > MVNETA_TX_MTU_MAX) + mtu = MVNETA_TX_MTU_MAX; + + /* Set MTU */ + val = mvreg_read(pp, MVNETA_TX_MTU); + val &= ~MVNETA_TX_MTU_MAX; + val |= mtu; + mvreg_write(pp, 
MVNETA_TX_MTU, val); + + /* TX token size and all TXQs token size must be larger that MTU */ + val = mvreg_read(pp, MVNETA_TX_TOKEN_SIZE); + + size = val & MVNETA_TX_TOKEN_SIZE_MAX; + if (size < mtu) { + size = mtu; + val &= ~MVNETA_TX_TOKEN_SIZE_MAX; + val |= size; + mvreg_write(pp, MVNETA_TX_TOKEN_SIZE, val); + } + for (queue = 0; queue < txq_number; queue++) { + val = mvreg_read(pp, MVNETA_TXQ_TOKEN_SIZE_REG(queue)); + + size = val & MVNETA_TXQ_TOKEN_SIZE_MAX; + if (size < mtu) { + size = mtu; + val &= ~MVNETA_TXQ_TOKEN_SIZE_MAX; + val |= size; + mvreg_write(pp, MVNETA_TXQ_TOKEN_SIZE_REG(queue), val); + } + } +} + +/* Set unicast address */ +static void mvneta_set_ucast_addr(struct mvneta_port *pp, u8 last_nibble, + int queue) +{ + unsigned int unicast_reg; + unsigned int tbl_offset; + unsigned int reg_offset; + + /* Locate the Unicast table entry */ + last_nibble = (0xf & last_nibble); + + /* offset from unicast tbl base */ + tbl_offset = (last_nibble / 4) * 4; + + /* offset within the above reg */ + reg_offset = last_nibble % 4; + + unicast_reg = mvreg_read(pp, (MVNETA_DA_FILT_UCAST_BASE + tbl_offset)); + + if (queue == -1) { + /* Clear accepts frame bit at specified unicast DA tbl entry */ + unicast_reg &= ~(0xff << (8 * reg_offset)); + } else { + unicast_reg &= ~(0xff << (8 * reg_offset)); + unicast_reg |= ((0x01 | (queue << 1)) << (8 * reg_offset)); + } + + mvreg_write(pp, (MVNETA_DA_FILT_UCAST_BASE + tbl_offset), unicast_reg); +} + +/* Set mac address */ +static void mvneta_mac_addr_set(struct mvneta_port *pp, unsigned char *addr, + int queue) +{ + unsigned int mac_h; + unsigned int mac_l; + + if (queue != -1) { + mac_l = (addr[4] << 8) | (addr[5]); + mac_h = (addr[0] << 24) | (addr[1] << 16) | + (addr[2] << 8) | (addr[3] << 0); + + mvreg_write(pp, MVNETA_MAC_ADDR_LOW, mac_l); + mvreg_write(pp, MVNETA_MAC_ADDR_HIGH, mac_h); + } + + /* Accept frames of this address */ + mvneta_set_ucast_addr(pp, addr[5], queue); +} + +/* + * Set the number of packets that will be received before + * RX interrupt will be generated by HW. + */ +static void mvneta_rx_pkts_coal_set(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, u32 value) +{ + mvreg_write(pp, MVNETA_RXQ_THRESHOLD_REG(rxq->id), + value | MVNETA_RXQ_NON_OCCUPIED(0)); + rxq->pkts_coal = value; +} + +/* + * Set the time delay in usec before + * RX interrupt will be generated by HW. 
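+ * The delay is converted to core-clock ticks below: with the 250 MHz
+ * clock-frequency from the device tree example above, a value of
+ * 100 usec becomes (250000000 / 1000000) * 100 = 25000 ticks.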
+ */ +static void mvneta_rx_time_coal_set(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq, u32 value) +{ + u32 val = (pp->clk_rate_hz / 1000000) * value; + + mvreg_write(pp, MVNETA_RXQ_TIME_COAL_REG(rxq->id), val); + rxq->time_coal = value; +} + +/* Set threshold for TX_DONE pkts coalescing */ +static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp, + struct mvneta_tx_queue *txq, u32 value) +{ + u32 val; + + val = mvreg_read(pp, MVNETA_TXQ_SIZE_REG(txq->id)); + + val &= ~MVNETA_TXQ_SENT_THRESH_ALL_MASK; + val |= MVNETA_TXQ_SENT_THRESH_MASK(value); + + mvreg_write(pp, MVNETA_TXQ_SIZE_REG(txq->id), val); + + txq->done_pkts_coal = value; +} + +/* Trigger tx done timer in MVNETA_TX_DONE_TIMER_PERIOD msecs */ +static void mvneta_add_tx_done_timer(struct mvneta_port *pp) +{ + if (test_and_set_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags) == 0) { + pp->tx_done_timer.expires = jiffies + + msecs_to_jiffies(MVNETA_TX_DONE_TIMER_PERIOD); + add_timer(&pp->tx_done_timer); + } +} + + +/* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */ +static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc, + u32 phys_addr, u32 cookie) +{ + rx_desc->buf_cookie = cookie; + rx_desc->buf_phys_addr = phys_addr; +} + +/* Decrement sent descriptors counter */ +static void mvneta_txq_sent_desc_dec(struct mvneta_port *pp, + struct mvneta_tx_queue *txq, + int sent_desc) +{ + u32 val; + + /* Only 255 TX descriptors can be updated at once */ + while (sent_desc > 0xff) { + val = 0xff << MVNETA_TXQ_DEC_SENT_SHIFT; + mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val); + sent_desc = sent_desc - 0xff; + } + + val = sent_desc << MVNETA_TXQ_DEC_SENT_SHIFT; + mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val); +} + +/* Get number of TX descriptors already sent by HW */ +static int mvneta_txq_sent_desc_num_get(struct mvneta_port *pp, + struct mvneta_tx_queue *txq) +{ + u32 val; + int sent_desc; + + val = mvreg_read(pp, MVNETA_TXQ_STATUS_REG(txq->id)); + sent_desc = (val & MVNETA_TXQ_SENT_DESC_MASK) >> + MVNETA_TXQ_SENT_DESC_SHIFT; + + return sent_desc; +} + +/* + * Get number of sent descriptors and decrement counter. + * The number of sent descriptors is returned. 
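+ * The count is read from MVNETA_TXQ_STATUS_REG() and acknowledged via
+ * MVNETA_TXQ_UPDATE_REG(); the decrement field is only 8 bits wide,
+ * which is why mvneta_txq_sent_desc_dec() above works in chunks of
+ * at most 255 descriptors.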
+ */ +static int mvneta_txq_sent_desc_proc(struct mvneta_port *pp, + struct mvneta_tx_queue *txq) +{ + int sent_desc; + + /* Get number of sent descriptors */ + sent_desc = mvneta_txq_sent_desc_num_get(pp, txq); + + /* Decrement sent descriptors counter */ + if (sent_desc) + mvneta_txq_sent_desc_dec(pp, txq, sent_desc); + + return sent_desc; +} + +/* Set TXQ descriptors fields relevant for CSUM calculation */ +static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, + int ip_hdr_len, int l4_proto) +{ + u32 command; + + /* Fields: L3_offset, IP_hdrlen, L3_type, G_IPv4_chk, + G_L4_chk, L4_type; required only for checksum + calculation */ + command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; + command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; + + if (l3_proto == swab16(ETH_P_IP)) + command |= MVNETA_TXD_IP_CSUM; + else + command |= MVNETA_TX_L3_IP6; + + if (l4_proto == IPPROTO_TCP) + command |= MVNETA_TX_L4_CSUM_FULL; + else if (l4_proto == IPPROTO_UDP) + command |= MVNETA_TX_L4_UDP | MVNETA_TX_L4_CSUM_FULL; + else + command |= MVNETA_TX_L4_CSUM_NOT; + + return command; +} + + +/* Display more error info */ +static void mvneta_rx_error(struct mvneta_port *pp, + struct mvneta_rx_desc *rx_desc) +{ + u32 status = rx_desc->status; + + if (!mvneta_rxq_desc_is_first_last(rx_desc)) { + netdev_err(pp->dev, + "bad rx status %08x (buffer oversize), size=%d\n", + rx_desc->status, rx_desc->data_size); + return; + } + + switch (status & MVNETA_RXD_ERR_CODE_MASK) { + case MVNETA_RXD_ERR_CRC: + netdev_err(pp->dev, "bad rx status %08x (crc error), size=%d\n", + status, rx_desc->data_size); + break; + case MVNETA_RXD_ERR_OVERRUN: + netdev_err(pp->dev, "bad rx status %08x (overrun error), size=%d\n", + status, rx_desc->data_size); + break; + case MVNETA_RXD_ERR_LEN: + netdev_err(pp->dev, "bad rx status %08x (max frame length error), size=%d\n", + status, rx_desc->data_size); + break; + case MVNETA_RXD_ERR_RESOURCE: + netdev_err(pp->dev, "bad rx status %08x (resource error), size=%d\n", + status, rx_desc->data_size); + break; + } +} + +/* Handle RX checksum offload */ +static void mvneta_rx_csum(struct mvneta_port *pp, + struct mvneta_rx_desc *rx_desc, + struct sk_buff *skb) +{ + if ((rx_desc->status & MVNETA_RXD_L3_IP4) && + (rx_desc->status & MVNETA_RXD_L4_CSUM_OK)) { + skb->csum = 0; + skb->ip_summed = CHECKSUM_UNNECESSARY; + return; + } + + skb->ip_summed = CHECKSUM_NONE; +} + +/* Return tx queue pointer (find last set bit) according to causeTxDone reg */ +static struct mvneta_tx_queue *mvneta_tx_done_policy(struct mvneta_port *pp, + u32 cause) +{ + int queue = fls(cause) - 1; + + return (queue < 0 || queue >= txq_number) ? 
NULL : &pp->txqs[queue]; +} + +/* Free tx queue skbuffs */ +static void mvneta_txq_bufs_free(struct mvneta_port *pp, + struct mvneta_tx_queue *txq, int num) +{ + int i; + + for (i = 0; i < num; i++) { + struct mvneta_tx_desc *tx_desc = txq->descs + + txq->txq_get_index; + struct sk_buff *skb = txq->tx_skb[txq->txq_get_index]; + + mvneta_txq_inc_get(txq); + + if (!skb) + continue; + + dma_unmap_single(pp->dev->dev.parent, tx_desc->buf_phys_addr, + tx_desc->data_size, DMA_TO_DEVICE); + dev_kfree_skb_any(skb); + } +} + +/* Handle end of transmission */ +static int mvneta_txq_done(struct mvneta_port *pp, + struct mvneta_tx_queue *txq) +{ + struct netdev_queue *nq = netdev_get_tx_queue(pp->dev, txq->id); + int tx_done; + + tx_done = mvneta_txq_sent_desc_proc(pp, txq); + if (tx_done == 0) + return tx_done; + mvneta_txq_bufs_free(pp, txq, tx_done); + + txq->count -= tx_done; + + if (netif_tx_queue_stopped(nq)) { + if (txq->size - txq->count >= MAX_SKB_FRAGS + 1) + netif_tx_wake_queue(nq); + } + + return tx_done; +} + +/* Refill processing */ +static int mvneta_rx_refill(struct mvneta_port *pp, + struct mvneta_rx_desc *rx_desc) + +{ + dma_addr_t phys_addr; + struct sk_buff *skb; + + skb = netdev_alloc_skb(pp->dev, pp->pkt_size); + if (!skb) + return -ENOMEM; + + phys_addr = dma_map_single(pp->dev->dev.parent, skb->head, + MVNETA_RX_BUF_SIZE(pp->pkt_size), + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) { + dev_kfree_skb(skb); + return -ENOMEM; + } + + mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)skb); + + return 0; +} + +/* Handle tx checksum */ +static u32 mvneta_skb_tx_csum(struct mvneta_port *pp, struct sk_buff *skb) +{ + if (skb->ip_summed == CHECKSUM_PARTIAL) { + int ip_hdr_len = 0; + u8 l4_proto; + + if (skb->protocol == htons(ETH_P_IP)) { + struct iphdr *ip4h = ip_hdr(skb); + + /* Calculate IPv4 checksum and L4 checksum */ + ip_hdr_len = ip4h->ihl; + l4_proto = ip4h->protocol; + } else if (skb->protocol == htons(ETH_P_IPV6)) { + struct ipv6hdr *ip6h = ipv6_hdr(skb); + + /* Read l4_protocol from one of IPv6 extra headers */ + if (skb_network_header_len(skb) > 0) + ip_hdr_len = (skb_network_header_len(skb) >> 2); + l4_proto = ip6h->nexthdr; + } else + return MVNETA_TX_L4_CSUM_NOT; + + return mvneta_txq_desc_csum(skb_network_offset(skb), + skb->protocol, ip_hdr_len, l4_proto); + } + + return MVNETA_TX_L4_CSUM_NOT; +} + +/* + * Returns rx queue pointer (find last set bit) according to causeRxTx + * value + */ +static struct mvneta_rx_queue *mvneta_rx_policy(struct mvneta_port *pp, + u32 cause) +{ + int queue = fls(cause >> 8) - 1; + + return (queue < 0 || queue >= rxq_number) ? 
NULL : &pp->rxqs[queue]; +} + +/* Drop packets received by the RXQ and free buffers */ +static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, + struct mvneta_rx_queue *rxq) +{ + int rx_done, i; + + rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq); + for (i = 0; i < rxq->size; i++) { + struct mvneta_rx_desc *rx_desc = rxq->descs + i; + struct sk_buff *skb = (struct sk_buff *)rx_desc->buf_cookie; + + dev_kfree_skb_any(skb); + dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, + rx_desc->data_size, DMA_FROM_DEVICE); + } + + if (rx_done) + mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done); +} + +/* Main rx processing */ +static int mvneta_rx(struct mvneta_port *pp, int rx_todo, + struct mvneta_rx_queue *rxq) +{ + struct net_device *dev = pp->dev; + int rx_done, rx_filled; + + /* Get number of received packets */ + rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq); + + if (rx_todo > rx_done) + rx_todo = rx_done; + + rx_done = 0; + rx_filled = 0; + + /* Fairness NAPI loop */ + while (rx_done < rx_todo) { + struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq); + struct sk_buff *skb; + u32 rx_status; + int rx_bytes, err; + + prefetch(rx_desc); + rx_done++; + rx_filled++; + rx_status = rx_desc->status; + skb = (struct sk_buff *)rx_desc->buf_cookie; + + if (!mvneta_rxq_desc_is_first_last(rx_desc) || + (rx_status & MVNETA_RXD_ERR_SUMMARY)) { + dev->stats.rx_errors++; + mvneta_rx_error(pp, rx_desc); + mvneta_rx_desc_fill(rx_desc, rx_desc->buf_phys_addr, + (u32)skb); + continue; + } + + dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, + rx_desc->data_size, DMA_FROM_DEVICE); + + rx_bytes = rx_desc->data_size - + (ETH_FCS_LEN + MVNETA_MH_SIZE); + u64_stats_update_begin(&pp->rx_stats.syncp); + pp->rx_stats.packets++; + pp->rx_stats.bytes += rx_bytes; + u64_stats_update_end(&pp->rx_stats.syncp); + + /* Linux processing */ + skb_reserve(skb, MVNETA_MH_SIZE); + skb_put(skb, rx_bytes); + + skb->protocol = eth_type_trans(skb, dev); + + mvneta_rx_csum(pp, rx_desc, skb); + + napi_gro_receive(&pp->napi, skb); + + /* Refill processing */ + err = mvneta_rx_refill(pp, rx_desc); + if (err) { + netdev_err(pp->dev, "Linux processing - Can't refill\n"); + rxq->missed++; + rx_filled--; + } + } + + /* Update rxq management counters */ + mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_filled); + + return rx_done; +} + +/* Handle tx fragmentation processing */ +static int mvneta_tx_frag_process(struct mvneta_port *pp, struct sk_buff *skb, + struct mvneta_tx_queue *txq) +{ + struct mvneta_tx_desc *tx_desc; + int i; + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + void *addr = page_address(frag->page.p) + frag->page_offset; + + tx_desc = mvneta_txq_next_desc_get(txq); + tx_desc->data_size = frag->size; + + tx_desc->buf_phys_addr = + dma_map_single(pp->dev->dev.parent, addr, + tx_desc->data_size, DMA_TO_DEVICE); + + if (dma_mapping_error(pp->dev->dev.parent, + tx_desc->buf_phys_addr)) { + mvneta_txq_desc_put(txq); + goto error; + } + + if (i == (skb_shinfo(skb)->nr_frags - 1)) { + /* Last descriptor */ + tx_desc->command = MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD; + + txq->tx_skb[txq->txq_put_index] = skb; + + mvneta_txq_inc_put(txq); + } else { + /* Descriptor in the middle: Not First, Not Last */ + tx_desc->command = 0; + + txq->tx_skb[txq->txq_put_index] = NULL; + mvneta_txq_inc_put(txq); + } + } + + return 0; + +error: + /* Release all descriptors that were used to map fragments of + * this packet, as well as the corresponding DMA 
mappings */ + for (i = i - 1; i >= 0; i--) { + tx_desc = txq->descs + i; + dma_unmap_single(pp->dev->dev.parent, + tx_desc->buf_phys_addr, + tx_desc->data_size, + DMA_TO_DEVICE); + mvneta_txq_desc_put(txq); + } + + return -ENOMEM; +} + +/* Main tx processing */ +static int mvneta_tx(struct sk_buff *skb, struct net_device *dev) +{ + struct mvneta_port *pp = netdev_priv(dev); + struct mvneta_tx_queue *txq = &pp->txqs[txq_def]; + struct mvneta_tx_desc *tx_desc; + struct netdev_queue *nq; + int frags = 0; + u32 tx_cmd; + + if (!netif_running(dev)) + goto out; + + frags = skb_shinfo(skb)->nr_frags + 1; + nq = netdev_get_tx_queue(dev, txq_def); + + /* Get a descriptor for the first part of the packet */ + tx_desc = mvneta_txq_next_desc_get(txq); + + tx_cmd = mvneta_skb_tx_csum(pp, skb); + + tx_desc->data_size = skb_headlen(skb); + + tx_desc->buf_phys_addr = dma_map_single(dev->dev.parent, skb->data, + tx_desc->data_size, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(dev->dev.parent, + tx_desc->buf_phys_addr))) { + mvneta_txq_desc_put(txq); + frags = 0; + goto out; + } + + if (frags == 1) { + /* First and Last descriptor */ + tx_cmd |= MVNETA_TXD_FLZ_DESC; + tx_desc->command = tx_cmd; + txq->tx_skb[txq->txq_put_index] = skb; + mvneta_txq_inc_put(txq); + } else { + /* First but not Last */ + tx_cmd |= MVNETA_TXD_F_DESC; + txq->tx_skb[txq->txq_put_index] = NULL; + mvneta_txq_inc_put(txq); + tx_desc->command = tx_cmd; + /* Continue with other skb fragments */ + if (mvneta_tx_frag_process(pp, skb, txq)) { + dma_unmap_single(dev->dev.parent, + tx_desc->buf_phys_addr, + tx_desc->data_size, + DMA_TO_DEVICE); + mvneta_txq_desc_put(txq); + frags = 0; + goto out; + } + } + + txq->count += frags; + mvneta_txq_pend_desc_add(pp, txq, frags); + + if (txq->size - txq->count < MAX_SKB_FRAGS + 1) + netif_tx_stop_queue(nq); + +out: + if (frags > 0) { + u64_stats_update_begin(&pp->tx_stats.syncp); + pp->tx_stats.packets++; + pp->tx_stats.bytes += skb->len; + u64_stats_update_end(&pp->tx_stats.syncp); + + } else { + dev->stats.tx_dropped++; + dev_kfree_skb_any(skb); + } + + if (txq->count >= MVNETA_TXDONE_COAL_PKTS) + mvneta_txq_done(pp, txq); + + /* If after calling mvneta_txq_done, count equals + frags, we need to set the timer */ + if (txq->count == frags && frags > 0) + mvneta_add_tx_done_timer(pp); + + return NETDEV_TX_OK; +} + + +/* Free tx resources, when resetting a port */ +static void mvneta_txq_done_force(struct mvneta_port *pp, + struct mvneta_tx_queue *txq) + +{ + int tx_done = txq->count; + + mvneta_txq_bufs_free(pp, txq, tx_done); + + /* reset txq */ + txq->count = 0; + txq->txq_put_index = 0; + txq->txq_get_index = 0; +} + +/* handle tx done - called from tx done timer callback */ +static u32 mvneta_tx_done_gbe(struct mvneta_port *pp, u32 cause_tx_done, + int *tx_todo) +{ + struct mvneta_tx_queue *txq; + u32 tx_done = 0; + struct netdev_queue *nq; + + *tx_todo = 0; + while (cause_tx_done != 0) { + txq = mvneta_tx_done_policy(pp, cause_tx_done); + if (!txq) + break; + + nq = netdev_get_tx_queue(pp->dev, txq->id); + __netif_tx_lock(nq, smp_processor_id()); + + if (txq->count) { + tx_done += mvneta_txq_done(pp, txq); + *tx_todo += txq->count; + } + + __netif_tx_unlock(nq); + cause_tx_done &= ~((1 << txq->id)); + } + + return tx_done; +} + +/* + * Compute crc8 of the specified address, using a unique algorithm , + * according to hw spec, different than generic crc8 algorithm + */ +static int mvneta_addr_crc(unsigned char *addr) +{ + int crc = 0; + int i; + + for (i = 0; i < ETH_ALEN; i++) { + int j; 
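+		/* MSB-first CRC-8 with polynomial 0x107, i.e.
+		 * x^8 + x^2 + x + 1, as given by the hardware spec */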
+ + crc = (crc ^ addr[i]) << 8; + for (j = 7; j >= 0; j--) { + if (crc & (0x100 << j)) + crc ^= 0x107 << j; + } + } + + return crc; +} + +/* This method controls the net device special MAC multicast support. + * The Special Multicast Table for MAC addresses supports MAC of the form + * 0x01-00-5E-00-00-XX (where XX is between 0x00 and 0xFF). + * The MAC DA[7:0] bits are used as a pointer to the Special Multicast + * Table entries in the DA-Filter table. This method set the Special + * Multicast Table appropriate entry. + */ +static void mvneta_set_special_mcast_addr(struct mvneta_port *pp, + unsigned char last_byte, + int queue) +{ + unsigned int smc_table_reg; + unsigned int tbl_offset; + unsigned int reg_offset; + + /* Register offset from SMC table base */ + tbl_offset = (last_byte / 4); + /* Entry offset within the above reg */ + reg_offset = last_byte % 4; + + smc_table_reg = mvreg_read(pp, (MVNETA_DA_FILT_SPEC_MCAST + + tbl_offset * 4)); + + if (queue == -1) + smc_table_reg &= ~(0xff << (8 * reg_offset)); + else { + smc_table_reg &= ~(0xff << (8 * reg_offset)); + smc_table_reg |= ((0x01 | (queue << 1)) << (8 * reg_offset)); + } + + mvreg_write(pp, MVNETA_DA_FILT_SPEC_MCAST + tbl_offset * 4, + smc_table_reg); +} + +/* This method controls the network device Other MAC multicast support. + * The Other Multicast Table is used for multicast of another type. + * A CRC-8 is used as an index to the Other Multicast Table entries + * in the DA-Filter table. + * The method gets the CRC-8 value from the calling routine and + * sets the Other Multicast Table appropriate entry according to the + * specified CRC-8 . + */ +static void mvneta_set_other_mcast_addr(struct mvneta_port *pp, + unsigned char crc8, + int queue) +{ + unsigned int omc_table_reg; + unsigned int tbl_offset; + unsigned int reg_offset; + + tbl_offset = (crc8 / 4) * 4; /* Register offset from OMC table base */ + reg_offset = crc8 % 4; /* Entry offset within the above reg */ + + omc_table_reg = mvreg_read(pp, MVNETA_DA_FILT_OTH_MCAST + tbl_offset); + + if (queue == -1) { + /* Clear accepts frame bit at specified Other DA table entry */ + omc_table_reg &= ~(0xff << (8 * reg_offset)); + } else { + omc_table_reg &= ~(0xff << (8 * reg_offset)); + omc_table_reg |= ((0x01 | (queue << 1)) << (8 * reg_offset)); + } + + mvreg_write(pp, MVNETA_DA_FILT_OTH_MCAST + tbl_offset, omc_table_reg); +} + +/* The network device supports multicast using two tables: + * 1) Special Multicast Table for MAC addresses of the form + * 0x01-00-5E-00-00-XX (where XX is between 0x00 and 0xFF). + * The MAC DA[7:0] bits are used as a pointer to the Special Multicast + * Table entries in the DA-Filter table. + * 2) Other Multicast Table for multicast of another type. A CRC-8 value + * is used as an index to the Other Multicast Table entries in the + * DA-Filter table. 
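+ * For example, the IPv4 all-hosts group 224.0.0.1 maps to the MAC
+ * address 01:00:5e:00:00:01 and so lands in entry 0x01 of the Special
+ * Multicast Table, while any multicast MAC outside the
+ * 01-00-5e-00-00-XX range is hashed by mvneta_addr_crc() above into
+ * the Other Multicast Table.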
+ */ +static int mvneta_mcast_addr_set(struct mvneta_port *pp, unsigned char *p_addr, + int queue) +{ + unsigned char crc_result = 0; + + if (memcmp(p_addr, "\x01\x00\x5e\x00\x00", 5) == 0) { + mvneta_set_special_mcast_addr(pp, p_addr[5], queue); + return 0; + } + + crc_result = mvneta_addr_crc(p_addr); + if (queue == -1) { + if (pp->mcast_count[crc_result] == 0) { + netdev_info(pp->dev, "No valid Mcast for crc8=0x%02x\n", + crc_result); + return -EINVAL; + } + + pp->mcast_count[crc_result]--; + if (pp->mcast_count[crc_result] != 0) { + netdev_info(pp->dev, + "After delete there are %d valid Mcast for crc8=0x%02x\n", + pp->mcast_count[crc_result], crc_result); + return -EINVAL; + } + } else + pp->mcast_count[crc_result]++; + + mvneta_set_other_mcast_addr(pp, crc_result, queue); + + return 0; +} + +/* Configure Fitering mode of Ethernet port */ +static void mvneta_rx_unicast_promisc_set(struct mvneta_port *pp, + int is_promisc) +{ + u32 port_cfg_reg, val; + + port_cfg_reg = mvreg_read(pp, MVNETA_PORT_CONFIG); + + val = mvreg_read(pp, MVNETA_TYPE_PRIO); + + /* Set / Clear UPM bit in port configuration register */ + if (is_promisc) { + /* Accept all Unicast addresses */ + port_cfg_reg |= MVNETA_UNI_PROMISC_MODE; + val |= MVNETA_FORCE_UNI; + mvreg_write(pp, MVNETA_MAC_ADDR_LOW, 0xffff); + mvreg_write(pp, MVNETA_MAC_ADDR_HIGH, 0xffffffff); + } else { + /* Reject all Unicast addresses */ + port_cfg_reg &= ~MVNETA_UNI_PROMISC_MODE; + val &= ~MVNETA_FORCE_UNI; + } + + mvreg_write(pp, MVNETA_PORT_CONFIG, port_cfg_reg); + mvreg_write(pp, MVNETA_TYPE_PRIO, val); +} + +/* register unicast and multicast addresses */ +static void mvneta_set_rx_mode(struct net_device *dev) +{ + struct mvneta_port *pp = netdev_priv(dev); + struct netdev_hw_addr *ha; + + if (dev->flags & IFF_PROMISC) { + /* Accept all: Multicast + Unicast */ + mvneta_rx_unicast_promisc_set(pp, 1); + mvneta_set_ucast_table(pp, rxq_def); + mvneta_set_special_mcast_table(pp, rxq_def); + mvneta_set_other_mcast_table(pp, rxq_def); + } else { + /* Accept single Unicast */ + mvneta_rx_unicast_promisc_set(pp, 0); + mvneta_set_ucast_table(pp, -1); + mvneta_mac_addr_set(pp, dev->dev_addr, rxq_def); + + if (dev->flags & IFF_ALLMULTI) { + /* Accept all multicast */ + mvneta_set_special_mcast_table(pp, rxq_def); + mvneta_set_other_mcast_table(pp, rxq_def); + } else { + /* Accept only initialized multicast */ + mvneta_set_special_mcast_table(pp, -1); + mvneta_set_other_mcast_table(pp, -1); + + if (!netdev_mc_empty(dev)) { + netdev_for_each_mc_addr(ha, dev) { + mvneta_mcast_addr_set(pp, ha->addr, + rxq_def); + } + } + } + } +} + +/* Interrupt handling - the callback for request_irq() */ +static irqreturn_t mvneta_isr(int irq, void *dev_id) +{ + struct mvneta_port *pp = (struct mvneta_port *)dev_id; + + /* Mask all interrupts */ + mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0); + + napi_schedule(&pp->napi); + + return IRQ_HANDLED; +} + +/* NAPI handler + * Bits 0 - 7 of the causeRxTx register indicate that are transmitted + * packets on the corresponding TXQ (Bit 0 is for TX queue 1). + * Bits 8 -15 of the cause Rx Tx register indicate that are received + * packets on the corresponding RXQ (Bit 8 is for RX queue 0). 
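+ * For example, a cause value of 0x0300 means packets are pending on
+ * RX queues 0 and 1; mvneta_rx_policy() above selects the highest
+ * queue first, via fls(0x0300 >> 8) - 1 = 1.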
+ * Each CPU has its own causeRxTx register.
+ */
+static int mvneta_poll(struct napi_struct *napi, int budget)
+{
+	int rx_done = 0;
+	u32 cause_rx_tx;
+	unsigned long flags;
+	struct mvneta_port *pp = netdev_priv(napi->dev);
+
+	if (!netif_running(pp->dev)) {
+		napi_complete(napi);
+		return rx_done;
+	}
+
+	/* Read cause register */
+	cause_rx_tx = mvreg_read(pp, MVNETA_INTR_NEW_CAUSE) &
+		MVNETA_RX_INTR_MASK(rxq_number);
+
+	/*
+	 * For the case where the last mvneta_poll did not process all
+	 * RX packets
+	 */
+	cause_rx_tx |= pp->cause_rx_tx;
+	if (rxq_number > 1) {
+		while ((cause_rx_tx != 0) && (budget > 0)) {
+			int count;
+			struct mvneta_rx_queue *rxq;
+			/* get rx queue number from cause_rx_tx */
+			rxq = mvneta_rx_policy(pp, cause_rx_tx);
+			if (!rxq)
+				break;
+
+			/* process the packets in that rx queue */
+			count = mvneta_rx(pp, budget, rxq);
+			rx_done += count;
+			budget -= count;
+			if (budget > 0) {
+				/* clear the rx bit of this queue in the
+				   cause register, so that the next
+				   iteration will find the next rx queue
+				   with received packets */
+				cause_rx_tx &= ~((1 << rxq->id) << 8);
+			}
+		}
+	} else {
+		rx_done = mvneta_rx(pp, budget, &pp->rxqs[rxq_def]);
+		budget -= rx_done;
+	}
+
+	if (budget > 0) {
+		cause_rx_tx = 0;
+		napi_complete(napi);
+		local_irq_save(flags);
+		mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+			    MVNETA_RX_INTR_MASK(rxq_number));
+		local_irq_restore(flags);
+	}
+
+	pp->cause_rx_tx = cause_rx_tx;
+	return rx_done;
+}
+
+/* tx done timer callback */
+static void mvneta_tx_done_timer_callback(unsigned long data)
+{
+	struct net_device *dev = (struct net_device *)data;
+	struct mvneta_port *pp = netdev_priv(dev);
+	int tx_done = 0, tx_todo = 0;
+
+	if (!netif_running(dev))
+		return;
+
+	clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags);
+
+	tx_done = mvneta_tx_done_gbe(pp,
+				     (((1 << txq_number) - 1) &
+				      MVNETA_CAUSE_TXQ_SENT_DESC_ALL_MASK),
+				     &tx_todo);
+	if (tx_todo > 0)
+		mvneta_add_tx_done_timer(pp);
+}
+
+/* Handle rxq fill: allocates rxq skbs; called when initializing a port */
+static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+			   int num)
+{
+	struct net_device *dev = pp->dev;
+	int i;
+
+	for (i = 0; i < num; i++) {
+		struct sk_buff *skb;
+		struct mvneta_rx_desc *rx_desc;
+		unsigned long phys_addr;
+
+		skb = dev_alloc_skb(pp->pkt_size);
+		if (!skb) {
+			netdev_err(dev, "%s:rxq %d, %d of %d buffs filled\n",
+				   __func__, rxq->id, i, num);
+			break;
+		}
+
+		rx_desc = rxq->descs + i;
+		memset(rx_desc, 0, sizeof(struct mvneta_rx_desc));
+		phys_addr = dma_map_single(dev->dev.parent, skb->head,
+					   MVNETA_RX_BUF_SIZE(pp->pkt_size),
+					   DMA_FROM_DEVICE);
+		if (unlikely(dma_mapping_error(dev->dev.parent, phys_addr))) {
+			dev_kfree_skb(skb);
+			break;
+		}
+
+		mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)skb);
+	}
+
+	/* Add this number of RX descriptors as non-occupied (ready to
+	   get packets) */
+	mvneta_rxq_non_occup_desc_add(pp, rxq, i);
+
+	return i;
+}
+
+/* Free all packets pending transmit from all TXQs and reset TX port */
+static void mvneta_tx_reset(struct mvneta_port *pp)
+{
+	int queue;
+
+	/* free the skb's in the hal tx ring */
+	for (queue = 0; queue < txq_number; queue++)
+		mvneta_txq_done_force(pp, &pp->txqs[queue]);
+
+	mvreg_write(pp, MVNETA_PORT_TX_RESET, MVNETA_PORT_TX_DMA_RESET);
+	mvreg_write(pp, MVNETA_PORT_TX_RESET, 0);
+}
+
+static void mvneta_rx_reset(struct mvneta_port *pp)
+{
+	mvreg_write(pp, MVNETA_PORT_RX_RESET, MVNETA_PORT_RX_DMA_RESET);
+	mvreg_write(pp, MVNETA_PORT_RX_RESET, 0);
+}
+
+/* Rx/Tx queue initialization/cleanup methods */
+
+/* Create a specified RX queue */
+static int mvneta_rxq_init(struct mvneta_port *pp,
+			   struct mvneta_rx_queue *rxq)
+{
+	rxq->size = pp->rx_ring_size;
+
+	/* Allocate memory for RX descriptors */
+	rxq->descs = dma_alloc_coherent(pp->dev->dev.parent,
+					rxq->size * MVNETA_DESC_ALIGNED_SIZE,
+					&rxq->descs_phys, GFP_KERNEL);
+	if (rxq->descs == NULL) {
+		netdev_err(pp->dev,
+			   "rxq=%d: Can't allocate %d bytes for %d RX descr\n",
+			   rxq->id, rxq->size * MVNETA_DESC_ALIGNED_SIZE,
+			   rxq->size);
+		return -ENOMEM;
+	}
+
+	BUG_ON(rxq->descs !=
+	       PTR_ALIGN(rxq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE));
+
+	rxq->last_desc = rxq->size - 1;
+
+	/* Set Rx descriptors queue starting address */
+	mvreg_write(pp, MVNETA_RXQ_BASE_ADDR_REG(rxq->id), rxq->descs_phys);
+	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
+
+	/* Set Offset */
+	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD);
+
+	/* Set coalescing pkts and time */
+	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
+	mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal);
+
+	/* Fill RXQ with buffers from RX pool */
+	mvneta_rxq_buf_size_set(pp, rxq, MVNETA_RX_BUF_SIZE(pp->pkt_size));
+	mvneta_rxq_bm_disable(pp, rxq);
+	mvneta_rxq_fill(pp, rxq, rxq->size);
+
+	return 0;
+}
+
+/* Cleanup Rx queue */
+static void mvneta_rxq_deinit(struct mvneta_port *pp,
+			      struct mvneta_rx_queue *rxq)
+{
+	mvneta_rxq_drop_pkts(pp, rxq);
+
+	if (rxq->descs)
+		dma_free_coherent(pp->dev->dev.parent,
+				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
+				  rxq->descs,
+				  rxq->descs_phys);
+
+	rxq->descs             = NULL;
+	rxq->last_desc         = 0;
+	rxq->next_desc_to_proc = 0;
+	rxq->descs_phys        = 0;
+}
+
+/* Create and initialize a tx queue */
+static int mvneta_txq_init(struct mvneta_port *pp,
+			   struct mvneta_tx_queue *txq)
+{
+	txq->size = pp->tx_ring_size;
+
+	/* Allocate memory for TX descriptors */
+	txq->descs = dma_alloc_coherent(pp->dev->dev.parent,
+					txq->size * MVNETA_DESC_ALIGNED_SIZE,
+					&txq->descs_phys, GFP_KERNEL);
+	if (txq->descs == NULL) {
+		netdev_err(pp->dev,
+			   "txQ=%d: Can't allocate %d bytes for %d TX descr\n",
+			   txq->id, txq->size * MVNETA_DESC_ALIGNED_SIZE,
+			   txq->size);
+		return -ENOMEM;
+	}
+
+	/* Make sure descriptor address is cache line size aligned */
+	BUG_ON(txq->descs !=
+	       PTR_ALIGN(txq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE));
+
+	txq->last_desc = txq->size - 1;
+
+	/* Set maximum bandwidth for enabled TXQs */
+	mvreg_write(pp, MVETH_TXQ_TOKEN_CFG_REG(txq->id), 0x03ffffff);
+	mvreg_write(pp, MVETH_TXQ_TOKEN_COUNT_REG(txq->id), 0x3fffffff);
+
+	/* Set Tx descriptors queue starting address */
+	mvreg_write(pp, MVNETA_TXQ_BASE_ADDR_REG(txq->id), txq->descs_phys);
+	mvreg_write(pp, MVNETA_TXQ_SIZE_REG(txq->id), txq->size);
+
+	txq->tx_skb = kmalloc(txq->size * sizeof(*txq->tx_skb), GFP_KERNEL);
+	if (txq->tx_skb == NULL) {
+		dma_free_coherent(pp->dev->dev.parent,
+				  txq->size * MVNETA_DESC_ALIGNED_SIZE,
+				  txq->descs, txq->descs_phys);
+		return -ENOMEM;
+	}
+	mvneta_tx_done_pkts_coal_set(pp, txq, txq->done_pkts_coal);
+
+	return 0;
+}
+
+/* Free all resources allocated for a tx queue */
+static void mvneta_txq_deinit(struct mvneta_port *pp,
+			      struct mvneta_tx_queue *txq)
+{
+	kfree(txq->tx_skb);
+
+	if (txq->descs)
+		dma_free_coherent(pp->dev->dev.parent,
+				  txq->size * MVNETA_DESC_ALIGNED_SIZE,
+				  txq->descs, txq->descs_phys);
+
+	txq->descs             = NULL;
+	txq->last_desc         = 0;
+	txq->next_desc_to_proc = 0;
+	txq->descs_phys        = 0;
+
+	/* Set minimum bandwidth for disabled TXQs */
+	mvreg_write(pp, MVETH_TXQ_TOKEN_CFG_REG(txq->id), 0);
+	mvreg_write(pp, MVETH_TXQ_TOKEN_COUNT_REG(txq->id), 0);
+
+	/* Set Tx descriptors queue starting address and size */
+	mvreg_write(pp, MVNETA_TXQ_BASE_ADDR_REG(txq->id), 0);
+	mvreg_write(pp, MVNETA_TXQ_SIZE_REG(txq->id), 0);
+}
+
+/* Cleanup all Tx queues */
+static void mvneta_cleanup_txqs(struct mvneta_port *pp)
+{
+	int queue;
+
+	for (queue = 0; queue < txq_number; queue++)
+		mvneta_txq_deinit(pp, &pp->txqs[queue]);
+}
+
+/* Cleanup all Rx queues */
+static void mvneta_cleanup_rxqs(struct mvneta_port *pp)
+{
+	int queue;
+
+	for (queue = 0; queue < rxq_number; queue++)
+		mvneta_rxq_deinit(pp, &pp->rxqs[queue]);
+}
+
+
+/* Init all Rx queues */
+static int mvneta_setup_rxqs(struct mvneta_port *pp)
+{
+	int queue;
+
+	for (queue = 0; queue < rxq_number; queue++) {
+		int err = mvneta_rxq_init(pp, &pp->rxqs[queue]);
+		if (err) {
+			netdev_err(pp->dev, "%s: can't create rxq=%d\n",
+				   __func__, queue);
+			mvneta_cleanup_rxqs(pp);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+/* Init all tx queues */
+static int mvneta_setup_txqs(struct mvneta_port *pp)
+{
+	int queue;
+
+	for (queue = 0; queue < txq_number; queue++) {
+		int err = mvneta_txq_init(pp, &pp->txqs[queue]);
+		if (err) {
+			netdev_err(pp->dev, "%s: can't create txq=%d\n",
+				   __func__, queue);
+			mvneta_cleanup_txqs(pp);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static void mvneta_start_dev(struct mvneta_port *pp)
+{
+	mvneta_max_rx_size_set(pp, pp->pkt_size);
+	mvneta_txq_max_tx_size_set(pp, pp->pkt_size);
+
+	/* start the Rx/Tx activity */
+	mvneta_port_enable(pp);
+
+	/* Enable polling on the port */
+	napi_enable(&pp->napi);
+
+	/* Unmask interrupts */
+	mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+		    MVNETA_RX_INTR_MASK(rxq_number));
+
+	phy_start(pp->phy_dev);
+	netif_tx_start_all_queues(pp->dev);
+}
+
+static void mvneta_stop_dev(struct mvneta_port *pp)
+{
+	phy_stop(pp->phy_dev);
+
+	napi_disable(&pp->napi);
+
+	netif_carrier_off(pp->dev);
+
+	mvneta_port_down(pp);
+	netif_tx_stop_all_queues(pp->dev);
+
+	/* Stop the port activity */
+	mvneta_port_disable(pp);
+
+	/* Clear all ethernet port interrupts */
+	mvreg_write(pp, MVNETA_INTR_MISC_CAUSE, 0);
+	mvreg_write(pp, MVNETA_INTR_OLD_CAUSE, 0);
+
+	/* Mask all ethernet port interrupts */
+	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+	mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
+	mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+
+	mvneta_tx_reset(pp);
+	mvneta_rx_reset(pp);
+}
+
+/* tx timeout callback - display a message and stop/start the network device */
+static void mvneta_tx_timeout(struct net_device *dev)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+
+	netdev_info(dev, "tx timeout\n");
+	mvneta_stop_dev(pp);
+	mvneta_start_dev(pp);
+}
+
+/* Return positive if MTU is valid */
+static int mvneta_check_mtu_valid(struct net_device *dev, int mtu)
+{
+	if (mtu < 68) {
+		netdev_err(dev, "cannot change mtu to less than 68\n");
+		return -EINVAL;
+	}
+
+	/* 9676 == 9700 - 20 and rounding to 8 */
+	if (mtu > 9676) {
+		netdev_info(dev, "Illegal MTU value %d, rounding to 9676\n", mtu);
+		mtu = 9676;
+	}
+
+	if (!IS_ALIGNED(MVNETA_RX_PKT_SIZE(mtu), 8)) {
+		netdev_info(dev, "Illegal MTU value %d, rounding to %d\n",
+			    mtu, ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8));
+		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
+	}
+
+	return mtu;
+}
+
+/* Change the device mtu */
+static int mvneta_change_mtu(struct net_device *dev, int mtu)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+	int ret;
+
+	mtu = mvneta_check_mtu_valid(dev, mtu);
+	if (mtu < 0)
+		return -EINVAL;
+
+	dev->mtu = mtu;
+
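+	/* When the interface is down, the new MTU is simply picked up
+	 * the next time the port is brought up
+	 */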
+	if (!netif_running(dev))
+		return 0;
+
+	/*
+	 * The interface is running, so we have to force a
+	 * reallocation of the RXQs
+	 */
+	mvneta_stop_dev(pp);
+
+	mvneta_cleanup_txqs(pp);
+	mvneta_cleanup_rxqs(pp);
+
+	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
+
+	ret = mvneta_setup_rxqs(pp);
+	if (ret) {
+		netdev_err(pp->dev, "unable to setup rxqs after MTU change\n");
+		return ret;
+	}
+
+	mvneta_setup_txqs(pp);
+
+	mvneta_start_dev(pp);
+	mvneta_port_up(pp);
+
+	return 0;
+}
+
+/* Handle setting mac address */
+static int mvneta_set_mac_addr(struct net_device *dev, void *addr)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+	u8 *mac = addr + 2;
+	int i;
+
+	if (netif_running(dev))
+		return -EBUSY;
+
+	/* Remove previous address table entry */
+	mvneta_mac_addr_set(pp, dev->dev_addr, -1);
+
+	/* Set new addr in hw */
+	mvneta_mac_addr_set(pp, mac, rxq_def);
+
+	/* Set addr in the device */
+	for (i = 0; i < ETH_ALEN; i++)
+		dev->dev_addr[i] = mac[i];
+
+	return 0;
+}
+
+static void mvneta_adjust_link(struct net_device *ndev)
+{
+	struct mvneta_port *pp = netdev_priv(ndev);
+	struct phy_device *phydev = pp->phy_dev;
+	int status_change = 0;
+
+	if (phydev->link) {
+		if ((pp->speed != phydev->speed) ||
+		    (pp->duplex != phydev->duplex)) {
+			u32 val;
+
+			val = mvreg_read(pp, MVNETA_GMAC_AUTONEG_CONFIG);
+			val &= ~(MVNETA_GMAC_CONFIG_MII_SPEED |
+				 MVNETA_GMAC_CONFIG_GMII_SPEED |
+				 MVNETA_GMAC_CONFIG_FULL_DUPLEX);
+
+			if (phydev->duplex)
+				val |= MVNETA_GMAC_CONFIG_FULL_DUPLEX;
+
+			if (phydev->speed == SPEED_1000)
+				val |= MVNETA_GMAC_CONFIG_GMII_SPEED;
+			else
+				val |= MVNETA_GMAC_CONFIG_MII_SPEED;
+
+			mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val);
+
+			pp->duplex = phydev->duplex;
+			pp->speed = phydev->speed;
+		}
+	}
+
+	if (phydev->link != pp->link) {
+		if (!phydev->link) {
+			pp->duplex = -1;
+			pp->speed = 0;
+		}
+
+		pp->link = phydev->link;
+		status_change = 1;
+	}
+
+	if (status_change) {
+		if (phydev->link) {
+			u32 val = mvreg_read(pp, MVNETA_GMAC_AUTONEG_CONFIG);
+			val |= (MVNETA_GMAC_FORCE_LINK_PASS |
+				MVNETA_GMAC_FORCE_LINK_DOWN);
+			mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val);
+			mvneta_port_up(pp);
+			netdev_info(pp->dev, "link up\n");
+		} else {
+			mvneta_port_down(pp);
+			netdev_info(pp->dev, "link down\n");
+		}
+	}
+}
+
+static int mvneta_mdio_probe(struct mvneta_port *pp)
+{
+	struct phy_device *phy_dev;
+
+	phy_dev = of_phy_connect(pp->dev, pp->phy_node, mvneta_adjust_link, 0,
+				 pp->phy_interface);
+	if (!phy_dev) {
+		netdev_err(pp->dev, "could not find the PHY\n");
+		return -ENODEV;
+	}
+
+	phy_dev->supported &= PHY_GBIT_FEATURES;
+	phy_dev->advertising = phy_dev->supported;
+
+	pp->phy_dev = phy_dev;
+	pp->link = 0;
+	pp->duplex = 0;
+	pp->speed = 0;
+
+	return 0;
+}
+
+static void mvneta_mdio_remove(struct mvneta_port *pp)
+{
+	phy_disconnect(pp->phy_dev);
+	pp->phy_dev = NULL;
+}
+
+static int mvneta_open(struct net_device *dev)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+	int ret;
+
+	mvneta_mac_addr_set(pp, dev->dev_addr, rxq_def);
+
+	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
+
+	ret = mvneta_setup_rxqs(pp);
+	if (ret)
+		return ret;
+
+	ret = mvneta_setup_txqs(pp);
+	if (ret)
+		goto err_cleanup_rxqs;
+
+	/* Connect to port interrupt line */
+	ret = request_irq(pp->dev->irq, mvneta_isr, 0,
+			  MVNETA_DRIVER_NAME, pp);
+	if (ret) {
+		netdev_err(pp->dev, "cannot request irq %d\n", pp->dev->irq);
+		goto err_cleanup_txqs;
+	}
+
+	/* By default, the link is down */
+	netif_carrier_off(pp->dev);
+
+	ret = mvneta_mdio_probe(pp);
+	if (ret < 0) {
+		netdev_err(dev, "cannot probe MDIO bus\n");
"cannot probe MDIO bus\n"); + goto err_free_irq; + } + + mvneta_start_dev(pp); + + return 0; + +err_free_irq: + free_irq(pp->dev->irq, pp); +err_cleanup_txqs: + mvneta_cleanup_txqs(pp); +err_cleanup_rxqs: + mvneta_cleanup_rxqs(pp); + return ret; +} + +/* Stop the port, free port interrupt line */ +static int mvneta_stop(struct net_device *dev) +{ + struct mvneta_port *pp = netdev_priv(dev); + + mvneta_stop_dev(pp); + mvneta_mdio_remove(pp); + free_irq(dev->irq, pp); + mvneta_cleanup_rxqs(pp); + mvneta_cleanup_txqs(pp); + del_timer(&pp->tx_done_timer); + clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); + + return 0; +} + +/* Ethtool methods */ + +/* Get settings (phy address, speed) for ethtools */ +int mvneta_ethtool_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) +{ + struct mvneta_port *pp = netdev_priv(dev); + + if (!pp->phy_dev) + return -ENODEV; + + return phy_ethtool_gset(pp->phy_dev, cmd); +} + +/* Set settings (phy address, speed) for ethtools */ +int mvneta_ethtool_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) +{ + struct mvneta_port *pp = netdev_priv(dev); + + if (!pp->phy_dev) + return -ENODEV; + + return phy_ethtool_sset(pp->phy_dev, cmd); +} + +/* Set interrupt coalescing for ethtools */ +static int mvneta_ethtool_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *c) +{ + struct mvneta_port *pp = netdev_priv(dev); + int queue; + + for (queue = 0; queue < rxq_number; queue++) { + struct mvneta_rx_queue *rxq = &pp->rxqs[queue]; + rxq->time_coal = c->rx_coalesce_usecs; + rxq->pkts_coal = c->rx_max_coalesced_frames; + mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal); + mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal); + } + + for (queue = 0; queue < txq_number; queue++) { + struct mvneta_tx_queue *txq = &pp->txqs[queue]; + txq->done_pkts_coal = c->tx_max_coalesced_frames; + mvneta_tx_done_pkts_coal_set(pp, txq, txq->done_pkts_coal); + } + + return 0; +} + +/* get coalescing for ethtools */ +static int mvneta_ethtool_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *c) +{ + struct mvneta_port *pp = netdev_priv(dev); + + c->rx_coalesce_usecs = pp->rxqs[0].time_coal; + c->rx_max_coalesced_frames = pp->rxqs[0].pkts_coal; + + c->tx_max_coalesced_frames = pp->txqs[0].done_pkts_coal; + return 0; +} + + +static void mvneta_ethtool_get_drvinfo(struct net_device *dev, + struct ethtool_drvinfo *drvinfo) +{ + strlcpy(drvinfo->driver, MVNETA_DRIVER_NAME, + sizeof(drvinfo->driver)); + strlcpy(drvinfo->version, MVNETA_DRIVER_VERSION, + sizeof(drvinfo->version)); + strlcpy(drvinfo->bus_info, dev_name(&dev->dev), + sizeof(drvinfo->bus_info)); +} + + +static void mvneta_ethtool_get_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct mvneta_port *pp = netdev_priv(netdev); + + ring->rx_max_pending = MVNETA_MAX_RXD; + ring->tx_max_pending = MVNETA_MAX_TXD; + ring->rx_pending = pp->rx_ring_size; + ring->tx_pending = pp->tx_ring_size; +} + +static int mvneta_ethtool_set_ringparam(struct net_device *dev, + struct ethtool_ringparam *ring) +{ + struct mvneta_port *pp = netdev_priv(dev); + + if ((ring->rx_pending == 0) || (ring->tx_pending == 0)) + return -EINVAL; + pp->rx_ring_size = ring->rx_pending < MVNETA_MAX_RXD ? + ring->rx_pending : MVNETA_MAX_RXD; + pp->tx_ring_size = ring->tx_pending < MVNETA_MAX_TXD ? 
+ ring->tx_pending : MVNETA_MAX_TXD; + + if (netif_running(dev)) { + mvneta_stop(dev); + if (mvneta_open(dev)) { + netdev_err(dev, + "error on opening device after ring param change\n"); + return -ENOMEM; + } + } + + return 0; +} + +static const struct net_device_ops mvneta_netdev_ops = { + .ndo_open = mvneta_open, + .ndo_stop = mvneta_stop, + .ndo_start_xmit = mvneta_tx, + .ndo_set_rx_mode = mvneta_set_rx_mode, + .ndo_set_mac_address = mvneta_set_mac_addr, + .ndo_change_mtu = mvneta_change_mtu, + .ndo_tx_timeout = mvneta_tx_timeout, + .ndo_get_stats64 = mvneta_get_stats64, +}; + +const struct ethtool_ops mvneta_eth_tool_ops = { + .get_link = ethtool_op_get_link, + .get_settings = mvneta_ethtool_get_settings, + .set_settings = mvneta_ethtool_set_settings, + .set_coalesce = mvneta_ethtool_set_coalesce, + .get_coalesce = mvneta_ethtool_get_coalesce, + .get_drvinfo = mvneta_ethtool_get_drvinfo, + .get_ringparam = mvneta_ethtool_get_ringparam, + .set_ringparam = mvneta_ethtool_set_ringparam, +}; + +/* Initialize hw */ +static int __devinit mvneta_init(struct mvneta_port *pp, int phy_addr) +{ + int queue; + + /* Disable port */ + mvneta_port_disable(pp); + + /* Set port default values */ + mvneta_defaults_set(pp); + + pp->txqs = kzalloc(txq_number * sizeof(struct mvneta_tx_queue), + GFP_KERNEL); + if (!pp->txqs) + return -ENOMEM; + + /* Initialize TX descriptor rings */ + for (queue = 0; queue < txq_number; queue++) { + struct mvneta_tx_queue *txq = &pp->txqs[queue]; + txq->id = queue; + txq->size = pp->tx_ring_size; + txq->done_pkts_coal = MVNETA_TXDONE_COAL_PKTS; + } + + pp->rxqs = kzalloc(rxq_number * sizeof(struct mvneta_rx_queue), + GFP_KERNEL); + if (!pp->rxqs) { + kfree(pp->txqs); + return -ENOMEM; + } + + /* Create Rx descriptor rings */ + for (queue = 0; queue < rxq_number; queue++) { + struct mvneta_rx_queue *rxq = &pp->rxqs[queue]; + rxq->id = queue; + rxq->size = pp->rx_ring_size; + rxq->pkts_coal = MVNETA_RX_COAL_PKTS; + rxq->time_coal = MVNETA_RX_COAL_USEC; + } + + return 0; +} + +static void __devexit mvneta_deinit(struct mvneta_port *pp) +{ + kfree(pp->txqs); + kfree(pp->rxqs); +} + +/* platform glue : initialize decoding windows */ +static void __devinit +mvneta_conf_mbus_windows(struct mvneta_port *pp, + const struct mbus_dram_target_info *dram) +{ + u32 win_enable; + u32 win_protect; + int i; + + for (i = 0; i < 6; i++) { + mvreg_write(pp, MVNETA_WIN_BASE(i), 0); + mvreg_write(pp, MVNETA_WIN_SIZE(i), 0); + + if (i < 4) + mvreg_write(pp, MVNETA_WIN_REMAP(i), 0); + } + + win_enable = 0x3f; + win_protect = 0; + + for (i = 0; i < dram->num_cs; i++) { + const struct mbus_dram_window *cs = dram->cs + i; + mvreg_write(pp, MVNETA_WIN_BASE(i), (cs->base & 0xffff0000) | + (cs->mbus_attr << 8) | dram->mbus_dram_target_id); + + mvreg_write(pp, MVNETA_WIN_SIZE(i), + (cs->size - 1) & 0xffff0000); + + win_enable &= ~(1 << i); + win_protect |= 3 << (2 * i); + } + + mvreg_write(pp, MVNETA_BASE_ADDR_ENABLE, win_enable); +} + +/* Power up the port */ +static void __devinit mvneta_port_power_up(struct mvneta_port *pp, int phy_mode) +{ + u32 val; + + /* MAC Cause register should be cleared */ + mvreg_write(pp, MVNETA_UNIT_INTR_CAUSE, 0); + + if (phy_mode == PHY_INTERFACE_MODE_SGMII) + mvneta_port_sgmii_config(pp); + + mvneta_gmac_rgmii_set(pp, 1); + + /* Cancel Port Reset */ + val = mvreg_read(pp, MVNETA_GMAC_CTRL_2); + val &= ~MVNETA_GMAC2_PORT_RESET; + mvreg_write(pp, MVNETA_GMAC_CTRL_2, val); + + while ((mvreg_read(pp, MVNETA_GMAC_CTRL_2) & + MVNETA_GMAC2_PORT_RESET) != 0) + continue; +} + +/* 
Device initialization routine */ +static int __devinit mvneta_probe(struct platform_device *pdev) +{ + const struct mbus_dram_target_info *dram_target_info; + struct device_node *dn = pdev->dev.of_node; + struct device_node *phy_node; + u32 phy_addr, clk_rate_hz; + struct mvneta_port *pp; + struct net_device *dev; + const char *mac_addr; + int phy_mode; + int err; + + /* + * Our multiqueue support is not complete, so for now, only + * allow the usage of the first RX queue + */ + if (rxq_def != 0) { + dev_err(&pdev->dev, "Invalid rxq_def argument: %d\n", rxq_def); + return -EINVAL; + } + + dev = alloc_etherdev_mq(sizeof(struct mvneta_port), 8); + if (!dev) + return -ENOMEM; + + dev->irq = irq_of_parse_and_map(dn, 0); + if (dev->irq == 0) { + err = -EINVAL; + goto err_free_netdev; + } + + phy_node = of_parse_phandle(dn, "phy", 0); + if (!phy_node) { + dev_err(&pdev->dev, "no associated PHY\n"); + err = -ENODEV; + goto err_free_irq; + } + + phy_mode = of_get_phy_mode(dn); + if (phy_mode < 0) { + dev_err(&pdev->dev, "incorrect phy-mode\n"); + err = -EINVAL; + goto err_free_irq; + } + + if (of_property_read_u32(dn, "clock-frequency", &clk_rate_hz) != 0) { + dev_err(&pdev->dev, "could not read clock-frequency\n"); + err = -EINVAL; + goto err_free_irq; + } + + mac_addr = of_get_mac_address(dn); + + if (!mac_addr || !is_valid_ether_addr(mac_addr)) + eth_hw_addr_random(dev); + else + memcpy(dev->dev_addr, mac_addr, ETH_ALEN); + + dev->tx_queue_len = MVNETA_MAX_TXD; + dev->watchdog_timeo = 5 * HZ; + dev->netdev_ops = &mvneta_netdev_ops; + + SET_ETHTOOL_OPS(dev, &mvneta_eth_tool_ops); + + pp = netdev_priv(dev); + + pp->tx_done_timer.function = mvneta_tx_done_timer_callback; + init_timer(&pp->tx_done_timer); + clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); + + pp->weight = MVNETA_RX_POLL_WEIGHT; + pp->clk_rate_hz = clk_rate_hz; + pp->phy_node = phy_node; + pp->phy_interface = phy_mode; + + pp->base = of_iomap(dn, 0); + if (pp->base == NULL) { + err = -ENOMEM; + goto err_free_irq; + } + + pp->tx_done_timer.data = (unsigned long)dev; + + pp->tx_ring_size = MVNETA_MAX_TXD; + pp->rx_ring_size = MVNETA_MAX_RXD; + + pp->dev = dev; + SET_NETDEV_DEV(dev, &pdev->dev); + + err = mvneta_init(pp, phy_addr); + if (err < 0) { + dev_err(&pdev->dev, "can't init eth hal\n"); + goto err_unmap; + } + mvneta_port_power_up(pp, phy_mode); + + dram_target_info = mv_mbus_dram_info(); + if (dram_target_info) + mvneta_conf_mbus_windows(pp, dram_target_info); + + netif_napi_add(dev, &pp->napi, mvneta_poll, pp->weight); + + err = register_netdev(dev); + if (err < 0) { + dev_err(&pdev->dev, "failed to register\n"); + goto err_deinit; + } + + dev->features = NETIF_F_SG | NETIF_F_IP_CSUM; + dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM; + dev->priv_flags |= IFF_UNICAST_FLT; + + netdev_info(dev, "mac: %pM\n", dev->dev_addr); + + platform_set_drvdata(pdev, pp->dev); + + return 0; + +err_deinit: + mvneta_deinit(pp); +err_unmap: + iounmap(pp->base); +err_free_irq: + irq_dispose_mapping(dev->irq); +err_free_netdev: + free_netdev(dev); + return err; +} + +/* Device removal routine */ +static int __devexit mvneta_remove(struct platform_device *pdev) +{ + struct net_device *dev = platform_get_drvdata(pdev); + struct mvneta_port *pp = netdev_priv(dev); + + unregister_netdev(dev); + mvneta_deinit(pp); + iounmap(pp->base); + irq_dispose_mapping(dev->irq); + free_netdev(dev); + + platform_set_drvdata(pdev, NULL); + + return 0; +} + +static const struct of_device_id mvneta_match[] = { + { .compatible = "marvell,armada-370-neta" }, + { } 
+};
+MODULE_DEVICE_TABLE(of, mvneta_match);
+
+static struct platform_driver mvneta_driver = {
+	.probe = mvneta_probe,
+	.remove = __devexit_p(mvneta_remove),
+	.driver = {
+		.name = MVNETA_DRIVER_NAME,
+		.of_match_table = mvneta_match,
+	},
+};
+
+module_platform_driver(mvneta_driver);
+
+MODULE_DESCRIPTION("Marvell NETA Ethernet Driver - www.marvell.com");
+MODULE_AUTHOR("Rami Rosen , Thomas Petazzoni ");
+MODULE_LICENSE("GPL");
+
+module_param(rxq_number, int, S_IRUGO);
+module_param(txq_number, int, S_IRUGO);
+
+module_param(rxq_def, int, S_IRUGO);
+module_param(txq_def, int, S_IRUGO);
--
cgit v1.2.3


From 97fa4cf442ff2872000d9110686371775795a32b Mon Sep 17 00:00:00 2001
From: Sebastian Hesselbarth
Date: Sat, 17 Nov 2012 15:22:22 +0100
Subject: clk: mvebu: add mvebu core clocks.

This driver provides DT clocks for the core clocks found on Marvell
Kirkwood, Dove and Armada 370/XP SoCs. The core clock frequencies and
ratios are determined by decoding the Sample-At-Reset registers.

Although technically correct, using a divider of 0 would lead to a
division-by-zero panic. Let's use a ratio of 0/1 instead, so that we
fail later with a zero clock.

Signed-off-by: Gregory CLEMENT
Signed-off-by: Sebastian Hesselbarth
Signed-off-by: Andrew Lunn
Tested-by: Gregory CLEMENT
---
 .../devicetree/bindings/clock/mvebu-core-clock.txt |  47 ++
 drivers/clk/Kconfig                                |   2 +
 drivers/clk/Makefile                               |   1 +
 drivers/clk/mvebu/Kconfig                          |   3 +
 drivers/clk/mvebu/Makefile                         |   1 +
 drivers/clk/mvebu/clk-core.c                       | 675 +++++++++++++++++++++
 drivers/clk/mvebu/clk-core.h                       |  18 +
 drivers/clk/mvebu/clk.c                            |  23 +
 include/linux/clk/mvebu.h                          |  22 +
 9 files changed, 792 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
 create mode 100644 drivers/clk/mvebu/Kconfig
 create mode 100644 drivers/clk/mvebu/Makefile
 create mode 100644 drivers/clk/mvebu/clk-core.c
 create mode 100644 drivers/clk/mvebu/clk-core.h
 create mode 100644 drivers/clk/mvebu/clk.c
 create mode 100644 include/linux/clk/mvebu.h
(limited to 'Documentation')

diff --git a/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
new file mode 100644
index 000000000000..1e662948661e
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
@@ -0,0 +1,47 @@
+* Core Clock bindings for Marvell MVEBU SoCs
+
+Marvell MVEBU SoCs usually allow determining core clock frequencies by
+reading the Sample-At-Reset (SAR) register. The core clock consumer should
+specify the desired clock by having the clock ID in its "clocks" phandle cell.
+ +The following is a list of provided IDs and clock names on Armada 370/XP: + 0 = tclk (Internal Bus clock) + 1 = cpuclk (CPU clock) + 2 = nbclk (L2 Cache clock) + 3 = hclk (DRAM control clock) + 4 = dramclk (DDR clock) + +The following is a list of provided IDs and clock names on Kirkwood and Dove: + 0 = tclk (Internal Bus clock) + 1 = cpuclk (CPU0 clock) + 2 = l2clk (L2 Cache clock derived from CPU0 clock) + 3 = ddrclk (DDR controller clock derived from CPU0 clock) + +Required properties: +- compatible : shall be one of the following: + "marvell,armada-370-core-clock" - For Armada 370 SoC core clocks + "marvell,armada-xp-core-clock" - For Armada XP SoC core clocks + "marvell,dove-core-clock" - for Dove SoC core clocks + "marvell,kirkwood-core-clock" - for Kirkwood SoC (except mv88f6180) + "marvell,mv88f6180-core-clock" - for Kirkwood MV88f6180 SoC +- reg : shall be the register address of the Sample-At-Reset (SAR) register +- #clock-cells : from common clock binding; shall be set to 1 + +Optional properties: +- clock-output-names : from common clock binding; allows overwrite default clock + output names ("tclk", "cpuclk", "l2clk", "ddrclk") + +Example: + +core_clk: core-clocks@d0214 { + compatible = "marvell,dove-core-clock"; + reg = <0xd0214 0x4>; + #clock-cells = <1>; +}; + +spi0: spi@10600 { + compatible = "marvell,orion-spi"; + /* ... */ + /* get tclk from core clock provider */ + clocks = <&core_clk 0>; +}; diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig index bace9e98f75d..60427c0d23e6 100644 --- a/drivers/clk/Kconfig +++ b/drivers/clk/Kconfig @@ -54,3 +54,5 @@ config COMMON_CLK_MAX77686 This driver supports Maxim 77686 crystal oscillator clock. endmenu + +source "drivers/clk/mvebu/Kconfig" diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile index 71a25b91de00..d0a14ae8d49c 100644 --- a/drivers/clk/Makefile +++ b/drivers/clk/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_PLAT_SPEAR) += spear/ obj-$(CONFIG_ARCH_U300) += clk-u300.o obj-$(CONFIG_COMMON_CLK_VERSATILE) += versatile/ obj-$(CONFIG_ARCH_PRIMA2) += clk-prima2.o +obj-$(CONFIG_PLAT_ORION) += mvebu/ ifeq ($(CONFIG_COMMON_CLK), y) obj-$(CONFIG_ARCH_MMP) += mmp/ endif diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig new file mode 100644 index 000000000000..fd7bf97ff74d --- /dev/null +++ b/drivers/clk/mvebu/Kconfig @@ -0,0 +1,3 @@ +config MVEBU_CLK_CORE + bool + diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile new file mode 100644 index 000000000000..de1d9617f75a --- /dev/null +++ b/drivers/clk/mvebu/Makefile @@ -0,0 +1 @@ +obj-$(CONFIG_MVEBU_CLK_CORE) += clk.o clk-core.o diff --git a/drivers/clk/mvebu/clk-core.c b/drivers/clk/mvebu/clk-core.c new file mode 100644 index 000000000000..69056a7479e8 --- /dev/null +++ b/drivers/clk/mvebu/clk-core.c @@ -0,0 +1,675 @@ +/* + * Marvell EBU clock core handling defined at reset + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * Sebastian Hesselbarth + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. 
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "clk-core.h"
+
+struct core_ratio {
+	int id;
+	const char *name;
+};
+
+struct core_clocks {
+	u32 (*get_tclk_freq)(void __iomem *sar);
+	u32 (*get_cpu_freq)(void __iomem *sar);
+	void (*get_clk_ratio)(void __iomem *sar, int id, int *mult, int *div);
+	const struct core_ratio *ratios;
+	int num_ratios;
+};
+
+static struct clk_onecell_data clk_data;
+
+static void __init mvebu_clk_core_setup(struct device_node *np,
+					struct core_clocks *coreclk)
+{
+	const char *tclk_name = "tclk";
+	const char *cpuclk_name = "cpuclk";
+	void __iomem *base;
+	unsigned long rate;
+	int n;
+
+	base = of_iomap(np, 0);
+	if (WARN_ON(!base))
+		return;
+
+	/*
+	 * Allocate struct for TCLK, cpu clk, and core ratio clocks
+	 */
+	clk_data.clk_num = 2 + coreclk->num_ratios;
+	clk_data.clks = kzalloc(clk_data.clk_num * sizeof(struct clk *),
+				GFP_KERNEL);
+	if (WARN_ON(!clk_data.clks))
+		return;
+
+	/*
+	 * Register TCLK
+	 */
+	of_property_read_string_index(np, "clock-output-names", 0,
+				      &tclk_name);
+	rate = coreclk->get_tclk_freq(base);
+	clk_data.clks[0] = clk_register_fixed_rate(NULL, tclk_name, NULL,
+						   CLK_IS_ROOT, rate);
+	WARN_ON(IS_ERR(clk_data.clks[0]));
+
+	/*
+	 * Register CPU clock
+	 */
+	of_property_read_string_index(np, "clock-output-names", 1,
+				      &cpuclk_name);
+	rate = coreclk->get_cpu_freq(base);
+	clk_data.clks[1] = clk_register_fixed_rate(NULL, cpuclk_name, NULL,
+						   CLK_IS_ROOT, rate);
+	WARN_ON(IS_ERR(clk_data.clks[1]));
+
+	/*
+	 * Register fixed-factor clocks derived from CPU clock
+	 */
+	for (n = 0; n < coreclk->num_ratios; n++) {
+		const char *rclk_name = coreclk->ratios[n].name;
+		int mult, div;
+
+		of_property_read_string_index(np, "clock-output-names",
+					      2+n, &rclk_name);
+		coreclk->get_clk_ratio(base, coreclk->ratios[n].id,
+				       &mult, &div);
+		clk_data.clks[2+n] = clk_register_fixed_factor(NULL, rclk_name,
+				cpuclk_name, 0, mult, div);
+		WARN_ON(IS_ERR(clk_data.clks[2+n]));
+	}
+
+	/*
+	 * SAR register isn't needed anymore
+	 */
+	iounmap(base);
+
+	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+}
+
+#ifdef CONFIG_MACH_ARMADA_370_XP
+/*
+ * The Armada 370/XP Sample At Reset value is a 64-bit bitfield split
+ * into two 32-bit registers
+ */
+
+#define SARL				0	/* Low part [0:31] */
+#define SARL_AXP_PCLK_FREQ_OPT		21
+#define SARL_AXP_PCLK_FREQ_OPT_MASK	0x7
+#define SARL_A370_PCLK_FREQ_OPT		11
+#define SARL_A370_PCLK_FREQ_OPT_MASK	0xF
+#define SARL_AXP_FAB_FREQ_OPT		24
+#define SARL_AXP_FAB_FREQ_OPT_MASK	0xF
+#define SARL_A370_FAB_FREQ_OPT		15
+#define SARL_A370_FAB_FREQ_OPT_MASK	0x1F
+#define SARL_A370_TCLK_FREQ_OPT		20
+#define SARL_A370_TCLK_FREQ_OPT_MASK	0x1
+#define SARH				4	/* High part [32:63] */
+#define SARH_AXP_PCLK_FREQ_OPT		(52-32)
+#define SARH_AXP_PCLK_FREQ_OPT_MASK	0x1
+#define SARH_AXP_PCLK_FREQ_OPT_SHIFT	3
+#define SARH_AXP_FAB_FREQ_OPT		(51-32)
+#define SARH_AXP_FAB_FREQ_OPT_MASK	0x1
+#define SARH_AXP_FAB_FREQ_OPT_SHIFT	4
+
+static const u32 __initconst armada_370_tclk_frequencies[] = {
+	166000000,
+	200000000,
+};
+
+static u32 __init armada_370_get_tclk_freq(void __iomem *sar)
+{
+	u8 tclk_freq_select = 0;
+
+	tclk_freq_select = ((readl(sar) >> SARL_A370_TCLK_FREQ_OPT) &
+			    SARL_A370_TCLK_FREQ_OPT_MASK);
+	return armada_370_tclk_frequencies[tclk_freq_select];
+}
+
+static const u32 __initconst armada_370_cpu_frequencies[] = {
+	400000000,
+	533000000,
+	667000000,
+	800000000,
+	1000000000,
+	1067000000,
+	1200000000,
+};
+
+static u32 __init armada_370_get_cpu_freq(void __iomem *sar)
+{
+	u32 cpu_freq;
+	u8 cpu_freq_select = 0;
+
+	cpu_freq_select = ((readl(sar) >> SARL_A370_PCLK_FREQ_OPT) &
+			   SARL_A370_PCLK_FREQ_OPT_MASK);
+	if (cpu_freq_select >= ARRAY_SIZE(armada_370_cpu_frequencies)) {
+		pr_err("CPU freq select unsupported %d\n", cpu_freq_select);
+		cpu_freq = 0;
+	} else
+		cpu_freq = armada_370_cpu_frequencies[cpu_freq_select];
+
+	return cpu_freq;
+}
+
+enum { A370_XP_NBCLK, A370_XP_HCLK, A370_XP_DRAMCLK };
+
+static const struct core_ratio __initconst armada_370_xp_core_ratios[] = {
+	{ .id = A370_XP_NBCLK,	 .name = "nbclk" },
+	{ .id = A370_XP_HCLK,	 .name = "hclk" },
+	{ .id = A370_XP_DRAMCLK, .name = "dramclk" },
+};
+
+static const int __initconst armada_370_xp_nbclk_ratios[32][2] = {
+	{0, 1}, {1, 2}, {2, 2}, {2, 2},
+	{1, 2}, {1, 2}, {1, 1}, {2, 3},
+	{0, 1}, {1, 2}, {2, 4}, {0, 1},
+	{1, 2}, {0, 1}, {0, 1}, {2, 2},
+	{0, 1}, {0, 1}, {0, 1}, {1, 1},
+	{2, 3}, {0, 1}, {0, 1}, {0, 1},
+	{0, 1}, {0, 1}, {0, 1}, {1, 1},
+	{0, 1}, {0, 1}, {0, 1}, {0, 1},
+};
+
+static const int __initconst armada_370_xp_hclk_ratios[32][2] = {
+	{0, 1}, {1, 2}, {2, 6}, {2, 3},
+	{1, 3}, {1, 4}, {1, 2}, {2, 6},
+	{0, 1}, {1, 6}, {2, 10}, {0, 1},
+	{1, 4}, {0, 1}, {0, 1}, {2, 5},
+	{0, 1}, {0, 1}, {0, 1}, {1, 2},
+	{2, 6}, {0, 1}, {0, 1}, {0, 1},
+	{0, 1}, {0, 1}, {0, 1}, {1, 1},
+	{0, 1}, {0, 1}, {0, 1}, {0, 1},
+};
+
+static const int __initconst armada_370_xp_dramclk_ratios[32][2] = {
+	{0, 1}, {1, 2}, {2, 3}, {2, 3},
+	{1, 3}, {1, 2}, {1, 2}, {2, 6},
+	{0, 1}, {1, 3}, {2, 5}, {0, 1},
+	{1, 4}, {0, 1}, {0, 1}, {2, 5},
+	{0, 1}, {0, 1}, {0, 1}, {1, 1},
+	{2, 3}, {0, 1}, {0, 1}, {0, 1},
+	{0, 1}, {0, 1}, {0, 1}, {1, 1},
+	{0, 1}, {0, 1}, {0, 1}, {0, 1},
+};
+
+static void __init armada_370_xp_get_clk_ratio(u32 opt,
+	void __iomem *sar, int id, int *mult, int *div)
+{
+	switch (id) {
+	case A370_XP_NBCLK:
+		*mult = armada_370_xp_nbclk_ratios[opt][0];
+		*div = armada_370_xp_nbclk_ratios[opt][1];
+		break;
+	case A370_XP_HCLK:
+		*mult = armada_370_xp_hclk_ratios[opt][0];
+		*div = armada_370_xp_hclk_ratios[opt][1];
+		break;
+	case A370_XP_DRAMCLK:
+		*mult = armada_370_xp_dramclk_ratios[opt][0];
+		*div = armada_370_xp_dramclk_ratios[opt][1];
+		break;
+	}
+}
+
+static void __init armada_370_get_clk_ratio(
+	void __iomem *sar, int id, int *mult, int *div)
+{
+	u32 opt = ((readl(sar) >> SARL_A370_FAB_FREQ_OPT) &
+		   SARL_A370_FAB_FREQ_OPT_MASK);
+
+	armada_370_xp_get_clk_ratio(opt, sar, id, mult, div);
+}
+
+
+static const struct core_clocks armada_370_core_clocks = {
+	.get_tclk_freq = armada_370_get_tclk_freq,
+	.get_cpu_freq = armada_370_get_cpu_freq,
+	.get_clk_ratio = armada_370_get_clk_ratio,
+	.ratios = armada_370_xp_core_ratios,
+	.num_ratios = ARRAY_SIZE(armada_370_xp_core_ratios),
+};
+
+static const u32 __initconst armada_xp_cpu_frequencies[] = {
+	1000000000,
+	1066000000,
+	1200000000,
+	1333000000,
+	1500000000,
+	1666000000,
+	1800000000,
+	2000000000,
+	667000000,
+	0,
+	800000000,
+	1600000000,
+};
+
+/* For Armada XP, the TCLK frequency is fixed: 250MHz */
+static u32 __init armada_xp_get_tclk_freq(void __iomem *sar)
+{
+	return 250 * 1000 * 1000;
+}
+
+static u32 __init armada_xp_get_cpu_freq(void __iomem *sar)
+{
+	u32 cpu_freq;
+	u8 cpu_freq_select = 0;
+
+	cpu_freq_select = ((readl(sar) >> SARL_AXP_PCLK_FREQ_OPT) &
+			   SARL_AXP_PCLK_FREQ_OPT_MASK);
+	/*
+	 * The upper bit is not contiguous to the other ones and
+	 * located in the high part of the SAR registers
+	 */
+	cpu_freq_select |= (((readl(sar+4) >> SARH_AXP_PCLK_FREQ_OPT) &
+			     SARH_AXP_PCLK_FREQ_OPT_MASK)
+			    << SARH_AXP_PCLK_FREQ_OPT_SHIFT);
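+	/* e.g. low bits 0b010 with the high bit set give index 10,
+	 * i.e. 800 MHz
+	 */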
+	if (cpu_freq_select >= ARRAY_SIZE(armada_xp_cpu_frequencies)) {
+		pr_err("CPU freq select unsupported: %d\n", cpu_freq_select);
+		cpu_freq = 0;
+	} else
+		cpu_freq = armada_xp_cpu_frequencies[cpu_freq_select];
+
+	return cpu_freq;
+}
+
+static void __init armada_xp_get_clk_ratio(
+	void __iomem *sar, int id, int *mult, int *div)
+{
+	u32 opt = ((readl(sar) >> SARL_AXP_FAB_FREQ_OPT) &
+		   SARL_AXP_FAB_FREQ_OPT_MASK);
+	/*
+	 * The upper bit is not contiguous to the other ones and
+	 * located in the high part of the SAR registers
+	 */
+	opt |= (((readl(sar+4) >> SARH_AXP_FAB_FREQ_OPT) &
+		 SARH_AXP_FAB_FREQ_OPT_MASK)
+		<< SARH_AXP_FAB_FREQ_OPT_SHIFT);
+
+	armada_370_xp_get_clk_ratio(opt, sar, id, mult, div);
+}
+
+static const struct core_clocks armada_xp_core_clocks = {
+	.get_tclk_freq = armada_xp_get_tclk_freq,
+	.get_cpu_freq = armada_xp_get_cpu_freq,
+	.get_clk_ratio = armada_xp_get_clk_ratio,
+	.ratios = armada_370_xp_core_ratios,
+	.num_ratios = ARRAY_SIZE(armada_370_xp_core_ratios),
+};
+
+#endif /* CONFIG_MACH_ARMADA_370_XP */
+
+/*
+ * Dove PLL sample-at-reset configuration
+ *
+ * SAR0[8:5]   : CPU frequency
+ *		 5  = 1000 MHz
+ *		 6  =  933 MHz
+ *		 7  =  933 MHz
+ *		 8  =  800 MHz
+ *		 9  =  800 MHz
+ *		 10 =  800 MHz
+ *		 11 = 1067 MHz
+ *		 12 =  667 MHz
+ *		 13 =  533 MHz
+ *		 14 =  400 MHz
+ *		 15 =  333 MHz
+ *		 others reserved.
+ *
+ * SAR0[11:9]  : CPU to L2 Clock divider ratio
+ *		 0 = (1/1) * CPU
+ *		 2 = (1/2) * CPU
+ *		 4 = (1/3) * CPU
+ *		 6 = (1/4) * CPU
+ *		 others reserved.
+ *
+ * SAR0[15:12] : CPU to DDR DRAM Clock divider ratio
+ *		 0  = (1/1) * CPU
+ *		 2  = (1/2) * CPU
+ *		 3  = (2/5) * CPU
+ *		 4  = (1/3) * CPU
+ *		 6  = (1/4) * CPU
+ *		 8  = (1/5) * CPU
+ *		 10 = (1/6) * CPU
+ *		 12 = (1/7) * CPU
+ *		 14 = (1/8) * CPU
+ *		 15 = (1/10) * CPU
+ *		 others reserved.
+ *
+ * SAR0[24:23] : TCLK frequency
+ *		 0 = 166 MHz
+ *		 1 = 125 MHz
+ *		 others reserved.
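+ *
+ * For example, SAR0[8:5] = 5, SAR0[11:9] = 2 and SAR0[15:12] = 2
+ * decode to a 1000 MHz CPU with l2clk = 500 MHz and ddrclk = 500 MHz.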
+ */ +#ifdef CONFIG_ARCH_DOVE +#define SAR_DOVE_CPU_FREQ 5 +#define SAR_DOVE_CPU_FREQ_MASK 0xf +#define SAR_DOVE_L2_RATIO 9 +#define SAR_DOVE_L2_RATIO_MASK 0x7 +#define SAR_DOVE_DDR_RATIO 12 +#define SAR_DOVE_DDR_RATIO_MASK 0xf +#define SAR_DOVE_TCLK_FREQ 23 +#define SAR_DOVE_TCLK_FREQ_MASK 0x3 + +static const u32 __initconst dove_tclk_frequencies[] = { + 166666667, + 125000000, + 0, 0 +}; + +static u32 __init dove_get_tclk_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_DOVE_TCLK_FREQ) & + SAR_DOVE_TCLK_FREQ_MASK; + return dove_tclk_frequencies[opt]; +} + +static const u32 __initconst dove_cpu_frequencies[] = { + 0, 0, 0, 0, 0, + 1000000000, + 933333333, 933333333, + 800000000, 800000000, 800000000, + 1066666667, + 666666667, + 533333333, + 400000000, + 333333333 +}; + +static u32 __init dove_get_cpu_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_DOVE_CPU_FREQ) & + SAR_DOVE_CPU_FREQ_MASK; + return dove_cpu_frequencies[opt]; +} + +enum { DOVE_CPU_TO_L2, DOVE_CPU_TO_DDR }; + +static const struct core_ratio __initconst dove_core_ratios[] = { + { .id = DOVE_CPU_TO_L2, .name = "l2clk", }, + { .id = DOVE_CPU_TO_DDR, .name = "ddrclk", } +}; + +static const int __initconst dove_cpu_l2_ratios[8][2] = { + { 1, 1 }, { 0, 1 }, { 1, 2 }, { 0, 1 }, + { 1, 3 }, { 0, 1 }, { 1, 4 }, { 0, 1 } +}; + +static const int __initconst dove_cpu_ddr_ratios[16][2] = { + { 1, 1 }, { 0, 1 }, { 1, 2 }, { 2, 5 }, + { 1, 3 }, { 0, 1 }, { 1, 4 }, { 0, 1 }, + { 1, 5 }, { 0, 1 }, { 1, 6 }, { 0, 1 }, + { 1, 7 }, { 0, 1 }, { 1, 8 }, { 1, 10 } +}; + +static void __init dove_get_clk_ratio( + void __iomem *sar, int id, int *mult, int *div) +{ + switch (id) { + case DOVE_CPU_TO_L2: + { + u32 opt = (readl(sar) >> SAR_DOVE_L2_RATIO) & + SAR_DOVE_L2_RATIO_MASK; + *mult = dove_cpu_l2_ratios[opt][0]; + *div = dove_cpu_l2_ratios[opt][1]; + break; + } + case DOVE_CPU_TO_DDR: + { + u32 opt = (readl(sar) >> SAR_DOVE_DDR_RATIO) & + SAR_DOVE_DDR_RATIO_MASK; + *mult = dove_cpu_ddr_ratios[opt][0]; + *div = dove_cpu_ddr_ratios[opt][1]; + break; + } + } +} + +static const struct core_clocks dove_core_clocks = { + .get_tclk_freq = dove_get_tclk_freq, + .get_cpu_freq = dove_get_cpu_freq, + .get_clk_ratio = dove_get_clk_ratio, + .ratios = dove_core_ratios, + .num_ratios = ARRAY_SIZE(dove_core_ratios), +}; +#endif /* CONFIG_ARCH_DOVE */ + +/* + * Kirkwood PLL sample-at-reset configuration + * (6180 has different SAR layout than other Kirkwood SoCs) + * + * SAR0[4:3,22,1] : CPU frequency (6281,6292,6282) + * 4 = 600 MHz + * 6 = 800 MHz + * 7 = 1000 MHz + * 9 = 1200 MHz + * 12 = 1500 MHz + * 13 = 1600 MHz + * 14 = 1800 MHz + * 15 = 2000 MHz + * others reserved. + * + * SAR0[19,10:9] : CPU to L2 Clock divider ratio (6281,6292,6282) + * 1 = (1/2) * CPU + * 3 = (1/3) * CPU + * 5 = (1/4) * CPU + * others reserved. + * + * SAR0[8:5] : CPU to DDR DRAM Clock divider ratio (6281,6292,6282) + * 2 = (1/2) * CPU + * 4 = (1/3) * CPU + * 6 = (1/4) * CPU + * 7 = (2/9) * CPU + * 8 = (1/5) * CPU + * 9 = (1/6) * CPU + * others reserved. + * + * SAR0[4:2] : Kirkwood 6180 cpu/l2/ddr clock configuration (6180 only) + * 5 = [CPU = 600 MHz, L2 = (1/2) * CPU, DDR = 200 MHz = (1/3) * CPU] + * 6 = [CPU = 800 MHz, L2 = (1/2) * CPU, DDR = 200 MHz = (1/4) * CPU] + * 7 = [CPU = 1000 MHz, L2 = (1/2) * CPU, DDR = 200 MHz = (1/5) * CPU] + * others reserved. + * + * SAR0[21] : TCLK frequency + * 0 = 200 MHz + * 1 = 166 MHz + * others reserved. 
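+ *
+ * For example, a CPU frequency selection of 9 with an L2 ratio of 3 and
+ * a DDR ratio of 4 decodes to a 1200 MHz CPU with l2clk = 400 MHz and
+ * ddrclk = 400 MHz.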
+ */ +#ifdef CONFIG_ARCH_KIRKWOOD +#define SAR_KIRKWOOD_CPU_FREQ(x) \ + (((x & (1 << 1)) >> 1) | \ + ((x & (1 << 22)) >> 21) | \ + ((x & (3 << 3)) >> 1)) +#define SAR_KIRKWOOD_L2_RATIO(x) \ + (((x & (3 << 9)) >> 9) | \ + (((x & (1 << 19)) >> 17))) +#define SAR_KIRKWOOD_DDR_RATIO 5 +#define SAR_KIRKWOOD_DDR_RATIO_MASK 0xf +#define SAR_MV88F6180_CLK 2 +#define SAR_MV88F6180_CLK_MASK 0x7 +#define SAR_KIRKWOOD_TCLK_FREQ 21 +#define SAR_KIRKWOOD_TCLK_FREQ_MASK 0x1 + +enum { KIRKWOOD_CPU_TO_L2, KIRKWOOD_CPU_TO_DDR }; + +static const struct core_ratio __initconst kirkwood_core_ratios[] = { + { .id = KIRKWOOD_CPU_TO_L2, .name = "l2clk", }, + { .id = KIRKWOOD_CPU_TO_DDR, .name = "ddrclk", } +}; + +static u32 __init kirkwood_get_tclk_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_KIRKWOOD_TCLK_FREQ) & + SAR_KIRKWOOD_TCLK_FREQ_MASK; + return (opt) ? 166666667 : 200000000; +} + +static const u32 __initconst kirkwood_cpu_frequencies[] = { + 0, 0, 0, 0, + 600000000, + 0, + 800000000, + 1000000000, + 0, + 1200000000, + 0, 0, + 1500000000, + 1600000000, + 1800000000, + 2000000000 +}; + +static u32 __init kirkwood_get_cpu_freq(void __iomem *sar) +{ + u32 opt = SAR_KIRKWOOD_CPU_FREQ(readl(sar)); + return kirkwood_cpu_frequencies[opt]; +} + +static const int __initconst kirkwood_cpu_l2_ratios[8][2] = { + { 0, 1 }, { 1, 2 }, { 0, 1 }, { 1, 3 }, + { 0, 1 }, { 1, 4 }, { 0, 1 }, { 0, 1 } +}; + +static const int __initconst kirkwood_cpu_ddr_ratios[16][2] = { + { 0, 1 }, { 0, 1 }, { 1, 2 }, { 0, 1 }, + { 1, 3 }, { 0, 1 }, { 1, 4 }, { 2, 9 }, + { 1, 5 }, { 1, 6 }, { 0, 1 }, { 0, 1 }, + { 0, 1 }, { 0, 1 }, { 0, 1 }, { 0, 1 } +}; + +static void __init kirkwood_get_clk_ratio( + void __iomem *sar, int id, int *mult, int *div) +{ + switch (id) { + case KIRKWOOD_CPU_TO_L2: + { + u32 opt = SAR_KIRKWOOD_L2_RATIO(readl(sar)); + *mult = kirkwood_cpu_l2_ratios[opt][0]; + *div = kirkwood_cpu_l2_ratios[opt][1]; + break; + } + case KIRKWOOD_CPU_TO_DDR: + { + u32 opt = (readl(sar) >> SAR_KIRKWOOD_DDR_RATIO) & + SAR_KIRKWOOD_DDR_RATIO_MASK; + *mult = kirkwood_cpu_ddr_ratios[opt][0]; + *div = kirkwood_cpu_ddr_ratios[opt][1]; + break; + } + } +} + +static const struct core_clocks kirkwood_core_clocks = { + .get_tclk_freq = kirkwood_get_tclk_freq, + .get_cpu_freq = kirkwood_get_cpu_freq, + .get_clk_ratio = kirkwood_get_clk_ratio, + .ratios = kirkwood_core_ratios, + .num_ratios = ARRAY_SIZE(kirkwood_core_ratios), +}; + +static const u32 __initconst mv88f6180_cpu_frequencies[] = { + 0, 0, 0, 0, 0, + 600000000, + 800000000, + 1000000000 +}; + +static u32 __init mv88f6180_get_cpu_freq(void __iomem *sar) +{ + u32 opt = (readl(sar) >> SAR_MV88F6180_CLK) & SAR_MV88F6180_CLK_MASK; + return mv88f6180_cpu_frequencies[opt]; +} + +static const int __initconst mv88f6180_cpu_ddr_ratios[8][2] = { + { 0, 1 }, { 0, 1 }, { 0, 1 }, { 0, 1 }, + { 0, 1 }, { 1, 3 }, { 1, 4 }, { 1, 5 } +}; + +static void __init mv88f6180_get_clk_ratio( + void __iomem *sar, int id, int *mult, int *div) +{ + switch (id) { + case KIRKWOOD_CPU_TO_L2: + { + /* mv88f6180 has a fixed 1:2 CPU-to-L2 ratio */ + *mult = 1; + *div = 2; + break; + } + case KIRKWOOD_CPU_TO_DDR: + { + u32 opt = (readl(sar) >> SAR_MV88F6180_CLK) & + SAR_MV88F6180_CLK_MASK; + *mult = mv88f6180_cpu_ddr_ratios[opt][0]; + *div = mv88f6180_cpu_ddr_ratios[opt][1]; + break; + } + } +} + +static const struct core_clocks mv88f6180_core_clocks = { + .get_tclk_freq = kirkwood_get_tclk_freq, + .get_cpu_freq = mv88f6180_get_cpu_freq, + .get_clk_ratio = mv88f6180_get_clk_ratio, + .ratios = 
kirkwood_core_ratios, + .num_ratios = ARRAY_SIZE(kirkwood_core_ratios), +}; +#endif /* CONFIG_ARCH_KIRKWOOD */ + +static const __initdata struct of_device_id clk_core_match[] = { +#ifdef CONFIG_MACH_ARMADA_370_XP + { + .compatible = "marvell,armada-370-core-clock", + .data = &armada_370_core_clocks, + }, + { + .compatible = "marvell,armada-xp-core-clock", + .data = &armada_xp_core_clocks, + }, +#endif +#ifdef CONFIG_ARCH_DOVE + { + .compatible = "marvell,dove-core-clock", + .data = &dove_core_clocks, + }, +#endif + +#ifdef CONFIG_ARCH_KIRKWOOD + { + .compatible = "marvell,kirkwood-core-clock", + .data = &kirkwood_core_clocks, + }, + { + .compatible = "marvell,mv88f6180-core-clock", + .data = &mv88f6180_core_clocks, + }, +#endif + + { } +}; + +void __init mvebu_core_clk_init(void) +{ + struct device_node *np; + + for_each_matching_node(np, clk_core_match) { + const struct of_device_id *match = + of_match_node(clk_core_match, np); + mvebu_clk_core_setup(np, (struct core_clocks *)match->data); + } +} diff --git a/drivers/clk/mvebu/clk-core.h b/drivers/clk/mvebu/clk-core.h new file mode 100644 index 000000000000..28b5e02e9885 --- /dev/null +++ b/drivers/clk/mvebu/clk-core.h @@ -0,0 +1,18 @@ +/* + * * Marvell EBU clock core handling defined at reset + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#ifndef __MVEBU_CLK_CORE_H +#define __MVEBU_CLK_CORE_H + +void __init mvebu_core_clk_init(void); + +#endif diff --git a/drivers/clk/mvebu/clk.c b/drivers/clk/mvebu/clk.c new file mode 100644 index 000000000000..e6742acf9880 --- /dev/null +++ b/drivers/clk/mvebu/clk.c @@ -0,0 +1,23 @@ +/* + * Marvell EBU SoC clock handling. + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ +#include +#include +#include +#include +#include +#include +#include "clk-core.h" + +void __init mvebu_clocks_init(void) +{ + mvebu_core_clk_init(); +} diff --git a/include/linux/clk/mvebu.h b/include/linux/clk/mvebu.h new file mode 100644 index 000000000000..8c4ae713b063 --- /dev/null +++ b/include/linux/clk/mvebu.h @@ -0,0 +1,22 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#ifndef __CLK_MVEBU_H_ +#define __CLK_MVEBU_H_ + +void __init mvebu_clocks_init(void); + +#endif -- cgit v1.2.3 From ab8ba01b3fe5e0b81bd2da0afe66f7f6968e017b Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Sat, 17 Nov 2012 15:22:23 +0100 Subject: clk: mvebu: add armada-370-xp CPU specific clocks Add Armada 370/XP specific CPU clocks Signed-off-by: Gregory CLEMENT Signed-off-by: Thomas Petazzoni Tested-by: Gregory CLEMENT --- .../devicetree/bindings/clock/mvebu-cpu-clock.txt | 21 +++ drivers/clk/mvebu/Kconfig | 3 + drivers/clk/mvebu/Makefile | 1 + drivers/clk/mvebu/clk-cpu.c | 186 +++++++++++++++++++++ drivers/clk/mvebu/clk-cpu.h | 22 +++ drivers/clk/mvebu/clk.c | 2 + 6 files changed, 235 insertions(+) create mode 100644 Documentation/devicetree/bindings/clock/mvebu-cpu-clock.txt create mode 100644 drivers/clk/mvebu/clk-cpu.c create mode 100644 drivers/clk/mvebu/clk-cpu.h (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/clock/mvebu-cpu-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-cpu-clock.txt new file mode 100644 index 000000000000..feb830130714 --- /dev/null +++ b/Documentation/devicetree/bindings/clock/mvebu-cpu-clock.txt @@ -0,0 +1,21 @@ +Device Tree Clock bindings for cpu clock of Marvell EBU platforms + +Required properties: +- compatible : shall be one of the following: + "marvell,armada-xp-cpu-clock" - cpu clocks for Armada XP +- reg : Address and length of the clock complex register set +- #clock-cells : should be set to 1. +- clocks : shall be the input parent clock phandle for the clock. + +cpuclk: clock-complex@d0018700 { + #clock-cells = <1>; + compatible = "marvell,armada-xp-cpu-clock"; + reg = <0xd0018700 0xA0>; + clocks = <&coreclk 1>; +} + +cpu@0 { + compatible = "marvell,sheeva-v7"; + reg = <0>; + clocks = <&cpuclk 0>; +}; diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig index fd7bf97ff74d..1dd93ada9cb3 100644 --- a/drivers/clk/mvebu/Kconfig +++ b/drivers/clk/mvebu/Kconfig @@ -1,3 +1,6 @@ config MVEBU_CLK_CORE bool +config MVEBU_CLK_CPU + bool + diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile index de1d9617f75a..93da083b77e3 100644 --- a/drivers/clk/mvebu/Makefile +++ b/drivers/clk/mvebu/Makefile @@ -1 +1,2 @@ obj-$(CONFIG_MVEBU_CLK_CORE) += clk.o clk-core.o +obj-$(CONFIG_MVEBU_CLK_CPU) += clk-cpu.o diff --git a/drivers/clk/mvebu/clk-cpu.c b/drivers/clk/mvebu/clk-cpu.c new file mode 100644 index 000000000000..ff004578a119 --- /dev/null +++ b/drivers/clk/mvebu/clk-cpu.c @@ -0,0 +1,186 @@ +/* + * Marvell MVEBU CPU clock handling. + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include "clk-cpu.h" + +#define SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET 0x0 +#define SYS_CTRL_CLK_DIVIDER_VALUE_OFFSET 0xC +#define SYS_CTRL_CLK_DIVIDER_MASK 0x3F + +#define MAX_CPU 4 +struct cpu_clk { + struct clk_hw hw; + int cpu; + const char *clk_name; + const char *parent_name; + void __iomem *reg_base; +}; + +static struct clk **clks; + +static struct clk_onecell_data clk_data; + +#define to_cpu_clk(p) container_of(p, struct cpu_clk, hw) + +static unsigned long clk_cpu_recalc_rate(struct clk_hw *hwclk, + unsigned long parent_rate) +{ + struct cpu_clk *cpuclk = to_cpu_clk(hwclk); + u32 reg, div; + + reg = readl(cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_VALUE_OFFSET); + div = (reg >> (cpuclk->cpu * 8)) & SYS_CTRL_CLK_DIVIDER_MASK; + return parent_rate / div; +} + +static long clk_cpu_round_rate(struct clk_hw *hwclk, unsigned long rate, + unsigned long *parent_rate) +{ + /* Valid ratio are 1:1, 1:2 and 1:3 */ + u32 div; + + div = *parent_rate / rate; + if (div == 0) + div = 1; + else if (div > 3) + div = 3; + + return *parent_rate / div; +} + +static int clk_cpu_set_rate(struct clk_hw *hwclk, unsigned long rate, + unsigned long parent_rate) +{ + struct cpu_clk *cpuclk = to_cpu_clk(hwclk); + u32 reg, div; + u32 reload_mask; + + div = parent_rate / rate; + reg = (readl(cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_VALUE_OFFSET) + & (~(SYS_CTRL_CLK_DIVIDER_MASK << (cpuclk->cpu * 8)))) + | (div << (cpuclk->cpu * 8)); + writel(reg, cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_VALUE_OFFSET); + /* Set clock divider reload smooth bit mask */ + reload_mask = 1 << (20 + cpuclk->cpu); + + reg = readl(cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET) + | reload_mask; + writel(reg, cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET); + + /* Now trigger the clock update */ + reg = readl(cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET) + | 1 << 24; + writel(reg, cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET); + + /* Wait for clocks to settle down then clear reload request */ + udelay(1000); + reg &= ~(reload_mask | 1 << 24); + writel(reg, cpuclk->reg_base + SYS_CTRL_CLK_DIVIDER_CTRL_OFFSET); + udelay(1000); + + return 0; +} + +static const struct clk_ops cpu_ops = { + .recalc_rate = clk_cpu_recalc_rate, + .round_rate = clk_cpu_round_rate, + .set_rate = clk_cpu_set_rate, +}; + +void __init of_cpu_clk_setup(struct device_node *node) +{ + struct cpu_clk *cpuclk; + void __iomem *clock_complex_base = of_iomap(node, 0); + int ncpus = 0; + struct device_node *dn; + + if (clock_complex_base == NULL) { + pr_err("%s: clock-complex base register not set\n", + __func__); + return; + } + + for_each_node_by_type(dn, "cpu") + ncpus++; + + cpuclk = kzalloc(ncpus * sizeof(*cpuclk), GFP_KERNEL); + if (WARN_ON(!cpuclk)) + return; + + clks = kzalloc(ncpus * sizeof(*clks), GFP_KERNEL); + if (WARN_ON(!clks)) + return; + + for_each_node_by_type(dn, "cpu") { + struct clk_init_data init; + struct clk *clk; + struct clk *parent_clk; + char *clk_name = kzalloc(5, GFP_KERNEL); + int cpu, err; + + if (WARN_ON(!clk_name)) + return; + + err = of_property_read_u32(dn, "reg", &cpu); + if (WARN_ON(err)) + return; + + sprintf(clk_name, "cpu%d", cpu); + parent_clk = of_clk_get(node, 0); + + cpuclk[cpu].parent_name = __clk_get_name(parent_clk); + cpuclk[cpu].clk_name = clk_name; + cpuclk[cpu].cpu = cpu; + cpuclk[cpu].reg_base = clock_complex_base; + cpuclk[cpu].hw.init = &init; + + init.name = cpuclk[cpu].clk_name; + init.ops = &cpu_ops; + init.flags = 0; + 
init.parent_names = &cpuclk[cpu].parent_name; + init.num_parents = 1; + + clk = clk_register(NULL, &cpuclk[cpu].hw); + if (WARN_ON(IS_ERR(clk))) + goto bail_out; + clks[cpu] = clk; + } + clk_data.clk_num = MAX_CPU; + clk_data.clks = clks; + of_clk_add_provider(node, of_clk_src_onecell_get, &clk_data); + + return; +bail_out: + kfree(clks); + kfree(cpuclk); +} + +static const __initconst struct of_device_id clk_cpu_match[] = { + { + .compatible = "marvell,armada-xp-cpu-clock", + .data = of_cpu_clk_setup, + }, + { + /* sentinel */ + }, +}; + +void __init mvebu_cpu_clk_init(void) +{ + of_clk_init(clk_cpu_match); +} diff --git a/drivers/clk/mvebu/clk-cpu.h b/drivers/clk/mvebu/clk-cpu.h new file mode 100644 index 000000000000..08e2affba4e6 --- /dev/null +++ b/drivers/clk/mvebu/clk-cpu.h @@ -0,0 +1,22 @@ +/* + * Marvell MVEBU CPU clock handling. + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#ifndef __MVEBU_CLK_CPU_H +#define __MVEBU_CLK_CPU_H + +#ifdef CONFIG_MVEBU_CLK_CPU +void __init mvebu_cpu_clk_init(void); +#else +static inline void mvebu_cpu_clk_init(void) {} +#endif + +#endif diff --git a/drivers/clk/mvebu/clk.c b/drivers/clk/mvebu/clk.c index e6742acf9880..e3d4d7a9de6a 100644 --- a/drivers/clk/mvebu/clk.c +++ b/drivers/clk/mvebu/clk.c @@ -16,8 +16,10 @@ #include #include #include "clk-core.h" +#include "clk-cpu.h" void __init mvebu_clocks_init(void) { mvebu_core_clk_init(); + mvebu_cpu_clk_init(); } -- cgit v1.2.3 From f97d0d7aa8f8cec29a24d65afa12a777c6d2a2f1 Mon Sep 17 00:00:00 2001 From: Sebastian Hesselbarth Date: Sat, 17 Nov 2012 15:22:26 +0100 Subject: clk: mvebu: add clock gating control provider for DT This driver allows to provide DT clocks for clock gates found on Marvell Dove and Kirkwood SoCs. The clock gates are referenced by the phandle index of the corresponding bit in the clock gating control register to ease lookup in the datasheet. Signed-off-by: Sebastian Hesselbarth --- .../bindings/clock/mvebu-gated-clock.txt | 76 +++++++++ drivers/clk/mvebu/Kconfig | 2 + drivers/clk/mvebu/Makefile | 1 + drivers/clk/mvebu/clk-gating-ctrl.c | 177 +++++++++++++++++++++ drivers/clk/mvebu/clk-gating-ctrl.h | 22 +++ drivers/clk/mvebu/clk.c | 2 + 6 files changed, 280 insertions(+) create mode 100644 Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt create mode 100644 drivers/clk/mvebu/clk-gating-ctrl.c create mode 100644 drivers/clk/mvebu/clk-gating-ctrl.h (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt new file mode 100644 index 000000000000..4ad8ccd15e67 --- /dev/null +++ b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt @@ -0,0 +1,76 @@ +* Gated Clock bindings for Marvell Orion SoCs + +Marvell Dove and Kirkwood allow some peripheral clocks to be gated to save +some power. The clock consumer should specify the desired clock by having +the clock ID in its "clocks" phandle cell. The clock ID is directly mapped to +the corresponding clock gating control bit in HW to ease manual clock lookup +in datasheet. 
+ +The following is a list of provided IDs for Dove: +ID Clock Peripheral +----------------------------------- +0 usb0 USB Host 0 +1 usb1 USB Host 1 +2 ge Gigabit Ethernet +3 sata SATA Host +4 pex0 PCIe Cntrl 0 +5 pex1 PCIe Cntrl 1 +8 sdio0 SDHCI Host 0 +9 sdio1 SDHCI Host 1 +10 nand NAND Cntrl +11 camera Camera Cntrl +12 i2s0 I2S Cntrl 0 +13 i2s1 I2S Cntrl 1 +15 crypto CESA engine +21 ac97 AC97 Cntrl +22 pdma Peripheral DMA +23 xor0 XOR DMA 0 +24 xor1 XOR DMA 1 +30 gephy Gigabit Ethernet PHY +Note: gephy(30) is implemented as a parent clock of ge(2) + +The following is a list of provided IDs for Kirkwood: +ID Clock Peripheral +----------------------------------- +0 ge0 Gigabit Ethernet 0 +2 pex0 PCIe Cntrl 0 +3 usb0 USB Host 0 +4 sdio SDIO Cntrl +5 tsu Transp. Stream Unit +6 dunit SDRAM Cntrl +7 runit Runit +8 xor0 XOR DMA 0 +9 audio I2S Cntrl 0 +14 sata0 SATA Host 0 +15 sata1 SATA Host 1 +16 xor1 XOR DMA 1 +17 crypto CESA engine +18 pex1 PCIe Cntrl 1 +19 ge1 Gigabit Ethernet 1 +20 tdm Time Division Mplx + +Required properties: +- compatible : shall be one of the following: + "marvell,dove-gating-clock" - for Dove SoC clock gating + "marvell,kirkwood-gating-clock" - for Kirkwood SoC clock gating +- reg : shall be the register address of the Clock Gating Control register +- #clock-cells : from common clock binding; shall be set to 1 + +Optional properties: +- clocks : default parent clock phandle (e.g. tclk) + +Example: + +gate_clk: clock-gating-control@d0038 { + compatible = "marvell,dove-gating-clock"; + reg = <0xd0038 0x4>; + /* default parent clock is tclk */ + clocks = <&core_clk 0>; + #clock-cells = <1>; +}; + +sdio0: sdio@92000 { + compatible = "marvell,dove-sdhci"; + /* get clk gate bit 8 (sdio0) */ + clocks = <&gate_clk 8>; +}; diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig index 1dd93ada9cb3..57323fd15ec9 100644 --- a/drivers/clk/mvebu/Kconfig +++ b/drivers/clk/mvebu/Kconfig @@ -4,3 +4,5 @@ config MVEBU_CLK_CORE config MVEBU_CLK_CPU bool +config MVEBU_CLK_GATING + bool diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile index 93da083b77e3..58df3dc49363 100644 --- a/drivers/clk/mvebu/Makefile +++ b/drivers/clk/mvebu/Makefile @@ -1,2 +1,3 @@ obj-$(CONFIG_MVEBU_CLK_CORE) += clk.o clk-core.o obj-$(CONFIG_MVEBU_CLK_CPU) += clk-cpu.o +obj-$(CONFIG_MVEBU_CLK_GATING) += clk-gating-ctrl.o diff --git a/drivers/clk/mvebu/clk-gating-ctrl.c b/drivers/clk/mvebu/clk-gating-ctrl.c new file mode 100644 index 000000000000..fa69f87c5797 --- /dev/null +++ b/drivers/clk/mvebu/clk-gating-ctrl.c @@ -0,0 +1,177 @@ +/* + * Marvell MVEBU clock gating control. + * + * Sebastian Hesselbarth + * Andrew Lunn + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct mvebu_gating_ctrl { + spinlock_t lock; + struct clk **gates; + int num_gates; +}; + +struct mvebu_soc_descr { + const char *name; + const char *parent; + int bit_idx; +}; + +#define to_clk_gate(_hw) container_of(_hw, struct clk_gate, hw) + +static struct clk __init *mvebu_clk_gating_get_src( + struct of_phandle_args *clkspec, void *data) +{ + struct mvebu_gating_ctrl *ctrl = (struct mvebu_gating_ctrl *)data; + int n; + + if (clkspec->args_count < 1) + return ERR_PTR(-EINVAL); + + for (n = 0; n < ctrl->num_gates; n++) { + struct clk_gate *gate = + to_clk_gate(__clk_get_hw(ctrl->gates[n])); + if (clkspec->args[0] == gate->bit_idx) + return ctrl->gates[n]; + } + return ERR_PTR(-ENODEV); +} + +static void __init mvebu_clk_gating_setup( + struct device_node *np, const struct mvebu_soc_descr *descr) +{ + struct mvebu_gating_ctrl *ctrl; + struct clk *clk; + void __iomem *base; + const char *default_parent = NULL; + int n; + + base = of_iomap(np, 0); + + clk = of_clk_get(np, 0); + if (!IS_ERR(clk)) { + default_parent = __clk_get_name(clk); + clk_put(clk); + } + + ctrl = kzalloc(sizeof(struct mvebu_gating_ctrl), GFP_KERNEL); + if (WARN_ON(!ctrl)) + return; + + spin_lock_init(&ctrl->lock); + + /* + * Count, allocate, and register clock gates + */ + for (n = 0; descr[n].name;) + n++; + + ctrl->num_gates = n; + ctrl->gates = kzalloc(ctrl->num_gates * sizeof(struct clk *), + GFP_KERNEL); + if (WARN_ON(!ctrl->gates)) { + kfree(ctrl); + return; + } + + for (n = 0; n < ctrl->num_gates; n++) { + const char *parent = + (descr[n].parent) ? descr[n].parent : default_parent; + ctrl->gates[n] = clk_register_gate(NULL, descr[n].name, parent, + 0, base, descr[n].bit_idx, 0, &ctrl->lock); + WARN_ON(IS_ERR(ctrl->gates[n])); + } + of_clk_add_provider(np, mvebu_clk_gating_get_src, ctrl); +} + +/* + * SoC specific clock gating control + */ + +#ifdef CONFIG_ARCH_DOVE +static const struct mvebu_soc_descr __initconst dove_gating_descr[] = { + { "usb0", NULL, 0 }, + { "usb1", NULL, 1 }, + { "ge", "gephy", 2 }, + { "sata", NULL, 3 }, + { "pex0", NULL, 4 }, + { "pex1", NULL, 5 }, + { "sdio0", NULL, 8 }, + { "sdio1", NULL, 9 }, + { "nand", NULL, 10 }, + { "camera", NULL, 11 }, + { "i2s0", NULL, 12 }, + { "i2s1", NULL, 13 }, + { "crypto", NULL, 15 }, + { "ac97", NULL, 21 }, + { "pdma", NULL, 22 }, + { "xor0", NULL, 23 }, + { "xor1", NULL, 24 }, + { "gephy", NULL, 30 }, + { } +}; +#endif + +#ifdef CONFIG_ARCH_KIRKWOOD +static const struct mvebu_soc_descr __initconst kirkwood_gating_descr[] = { + { "ge0", NULL, 0 }, + { "pex0", NULL, 2 }, + { "usb0", NULL, 3 }, + { "sdio", NULL, 4 }, + { "tsu", NULL, 5 }, + { "runit", NULL, 7 }, + { "xor0", NULL, 8 }, + { "audio", NULL, 9 }, + { "sata0", NULL, 14 }, + { "sata1", NULL, 15 }, + { "xor1", NULL, 16 }, + { "crypto", NULL, 17 }, + { "pex1", NULL, 18 }, + { "ge1", NULL, 19 }, + { "tdm", NULL, 20 }, + { } +}; +#endif + +static const __initdata struct of_device_id clk_gating_match[] = { +#ifdef CONFIG_ARCH_DOVE + { + .compatible = "marvell,dove-gating-clock", + .data = dove_gating_descr, + }, +#endif + +#ifdef CONFIG_ARCH_KIRKWOOD + { + .compatible = "marvell,kirkwood-gating-clock", + .data = kirkwood_gating_descr, + }, +#endif + + { } +}; + +void __init mvebu_gating_clk_init(void) +{ + struct device_node *np; + + for_each_matching_node(np, clk_gating_match) { + const struct of_device_id *match = + of_match_node(clk_gating_match, np); + mvebu_clk_gating_setup(np, + (const struct 
mvebu_soc_descr *)match->data); + } +} diff --git a/drivers/clk/mvebu/clk-gating-ctrl.h b/drivers/clk/mvebu/clk-gating-ctrl.h new file mode 100644 index 000000000000..9275d1e51f1b --- /dev/null +++ b/drivers/clk/mvebu/clk-gating-ctrl.h @@ -0,0 +1,22 @@ +/* + * Marvell EBU gating clock handling + * + * Copyright (C) 2012 Marvell + * + * Thomas Petazzoni + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#ifndef __MVEBU_CLK_GATING_H +#define __MVEBU_CLK_GATING_H + +#ifdef CONFIG_MVEBU_CLK_GATING +void __init mvebu_gating_clk_init(void); +#else +void mvebu_gating_clk_init(void) {} +#endif + +#endif diff --git a/drivers/clk/mvebu/clk.c b/drivers/clk/mvebu/clk.c index e3d4d7a9de6a..855681b8a9dc 100644 --- a/drivers/clk/mvebu/clk.c +++ b/drivers/clk/mvebu/clk.c @@ -17,9 +17,11 @@ #include #include "clk-core.h" #include "clk-cpu.h" +#include "clk-gating-ctrl.h" void __init mvebu_clocks_init(void) { mvebu_core_clk_init(); + mvebu_gating_clk_init(); mvebu_cpu_clk_init(); } -- cgit v1.2.3 From c4c34d608482b48c1c007fecea5a7a5c65168fa2 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Sat, 17 Nov 2012 15:22:29 +0100 Subject: clk: mvebu: armada 370/XP add clock gating control provider for DT Signed-off-by: Gregory CLEMENT Signed-off-by: Andrew Lunn Signed-off-by: Thomas Petazzoni --- .../bindings/clock/mvebu-gated-clock.txt | 43 +++++++++++++ drivers/clk/mvebu/clk-gating-ctrl.c | 74 +++++++++++++++++++++- 2 files changed, 116 insertions(+), 1 deletion(-) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt index 4ad8ccd15e67..7337005ef5e1 100644 --- a/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt +++ b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt @@ -6,6 +6,49 @@ the clock ID in its "clocks" phandle cell. The clock ID is directly mapped to the corresponding clock gating control bit in HW to ease manual clock lookup in datasheet. 
+The following is a list of provided IDs for Armada 370: +ID Clock Peripheral +----------------------------------- +0 audio AC97 Cntrl +1 pex0_en PCIe 0 Clock out +2 pex1_en PCIe 1 Clock out +3 ge1 Gigabit Ethernet 1 +4 ge0 Gigabit Ethernet 0 +5 pex0 PCIe Cntrl 0 +9 pex1 PCIe Cntrl 1 +15 sata0 SATA Host 0 +17 sdio SDHCI Host +25 tdm Time Division Mplx +28 ddr DDR Cntrl +30 sata1 SATA Host 1 + +The following is a list of provided IDs for Armada XP: +ID Clock Peripheral +----------------------------------- +0 audio Audio Cntrl +1 ge3 Gigabit Ethernet 3 +2 ge2 Gigabit Ethernet 2 +3 ge1 Gigabit Ethernet 1 +4 ge0 Gigabit Ethernet 0 +5 pex0 PCIe Cntrl 0 +6 pex1 PCIe Cntrl 1 +7 pex2 PCIe Cntrl 2 +8 pex3 PCIe Cntrl 3 +13 bp +14 sata0lnk +15 sata0 SATA Host 0 +16 lcd LCD Cntrl +17 sdio SDHCI Host +18 usb0 USB Host 0 +19 usb1 USB Host 1 +20 usb2 USB Host 2 +22 xor0 XOR DMA 0 +23 crypto CESA engine +25 tdm Time Division Mplx +28 xor1 XOR DMA 1 +29 sata1lnk +30 sata1 SATA Host 1 + The following is a list of provided IDs for Dove: ID Clock Peripheral ----------------------------------- diff --git a/drivers/clk/mvebu/clk-gating-ctrl.c b/drivers/clk/mvebu/clk-gating-ctrl.c index fa69f87c5797..c6d3c263b070 100644 --- a/drivers/clk/mvebu/clk-gating-ctrl.c +++ b/drivers/clk/mvebu/clk-gating-ctrl.c @@ -88,10 +88,21 @@ static void __init mvebu_clk_gating_setup( } for (n = 0; n < ctrl->num_gates; n++) { + u8 flags = 0; const char *parent = (descr[n].parent) ? descr[n].parent : default_parent; + + /* + * On Armada 370, the DDR clock is a special case: it + * isn't taken by any driver, but should anyway be + * kept enabled, so we mark it as IGNORE_UNUSED for + * now. + */ + if (!strcmp(descr[n].name, "ddr")) + flags |= CLK_IGNORE_UNUSED; + ctrl->gates[n] = clk_register_gate(NULL, descr[n].name, parent, - 0, base, descr[n].bit_idx, 0, &ctrl->lock); + flags, base, descr[n].bit_idx, 0, &ctrl->lock); WARN_ON(IS_ERR(ctrl->gates[n])); } of_clk_add_provider(np, mvebu_clk_gating_get_src, ctrl); @@ -101,6 +112,53 @@ static void __init mvebu_clk_gating_setup( * SoC specific clock gating control */ +#ifdef CONFIG_MACH_ARMADA_370 +static const struct mvebu_soc_descr __initconst armada_370_gating_descr[] = { + { "audio", NULL, 0 }, + { "pex0_en", NULL, 1 }, + { "pex1_en", NULL, 2 }, + { "ge1", NULL, 3 }, + { "ge0", NULL, 4 }, + { "pex0", NULL, 5 }, + { "pex1", NULL, 9 }, + { "sata0", NULL, 15 }, + { "sdio", NULL, 17 }, + { "tdm", NULL, 25 }, + { "ddr", NULL, 28 }, + { "sata1", NULL, 30 }, + { } +}; +#endif + +#ifdef CONFIG_MACH_ARMADA_XP +static const struct mvebu_soc_descr __initconst armada_xp_gating_descr[] = { + { "audio", NULL, 0 }, + { "ge3", NULL, 1 }, + { "ge2", NULL, 2 }, + { "ge1", NULL, 3 }, + { "ge0", NULL, 4 }, + { "pex0", NULL, 5 }, + { "pex1", NULL, 6 }, + { "pex2", NULL, 7 }, + { "pex3", NULL, 8 }, + { "bp", NULL, 13 }, + { "sata0lnk", NULL, 14 }, + { "sata0", "sata0lnk", 15 }, + { "lcd", NULL, 16 }, + { "sdio", NULL, 17 }, + { "usb0", NULL, 18 }, + { "usb1", NULL, 19 }, + { "usb2", NULL, 20 }, + { "xor0", NULL, 22 }, + { "crypto", NULL, 23 }, + { "tdm", NULL, 25 }, + { "xor1", NULL, 28 }, + { "sata1lnk", NULL, 29 }, + { "sata1", "sata1lnk", 30 }, + { } +}; +#endif + #ifdef CONFIG_ARCH_DOVE static const struct mvebu_soc_descr __initconst dove_gating_descr[] = { { "usb0", NULL, 0 }, @@ -147,6 +205,20 @@ static const struct mvebu_soc_descr __initconst kirkwood_gating_descr[] = { #endif static const __initdata struct of_device_id clk_gating_match[] = { +#ifdef CONFIG_MACH_ARMADA_370 + { + .compatible = 
"marvell,armada-370-gating-clock", + .data = armada_370_gating_descr, + }, +#endif + +#ifdef CONFIG_MACH_ARMADA_XP + { + .compatible = "marvell,armada-xp-gating-clock", + .data = armada_xp_gating_descr, + }, +#endif + #ifdef CONFIG_ARCH_DOVE { .compatible = "marvell,dove-gating-clock", -- cgit v1.2.3 From 307c2bf467e3682c6df1b8186365224fd2d581d3 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Sat, 17 Nov 2012 15:22:25 +0100 Subject: clocksource: convert time-armada-370-xp to clk framework Signed-off-by: Gregory CLEMENT Tested-by Gregory CLEMENT --- Documentation/devicetree/bindings/arm/armada-370-xp-timer.txt | 1 + arch/arm/boot/dts/armada-370-db.dts | 4 ---- arch/arm/boot/dts/armada-370-xp.dtsi | 1 + drivers/clocksource/time-armada-370-xp.c | 11 ++++++----- 4 files changed, 8 insertions(+), 9 deletions(-) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/armada-370-xp-timer.txt b/Documentation/devicetree/bindings/arm/armada-370-xp-timer.txt index 8b6ea2267c94..64830118b013 100644 --- a/Documentation/devicetree/bindings/arm/armada-370-xp-timer.txt +++ b/Documentation/devicetree/bindings/arm/armada-370-xp-timer.txt @@ -5,6 +5,7 @@ Required properties: - compatible: Should be "marvell,armada-370-xp-timer" - interrupts: Should contain the list of Global Timer interrupts - reg: Should contain the base address of the Global Timer registers +- clocks: clock driving the timer hardware Optional properties: - marvell,timer-25Mhz: Tells whether the Global timer supports the 25 diff --git a/arch/arm/boot/dts/armada-370-db.dts b/arch/arm/boot/dts/armada-370-db.dts index fffd5c2a3041..4a31b0396623 100644 --- a/arch/arm/boot/dts/armada-370-db.dts +++ b/arch/arm/boot/dts/armada-370-db.dts @@ -34,9 +34,5 @@ clock-frequency = <200000000>; status = "okay"; }; - timer@d0020300 { - clock-frequency = <600000000>; - status = "okay"; - }; }; }; diff --git a/arch/arm/boot/dts/armada-370-xp.dtsi b/arch/arm/boot/dts/armada-370-xp.dtsi index 16cc82cdaa81..94b4b9e03571 100644 --- a/arch/arm/boot/dts/armada-370-xp.dtsi +++ b/arch/arm/boot/dts/armada-370-xp.dtsi @@ -62,6 +62,7 @@ compatible = "marvell,armada-370-xp-timer"; reg = <0xd0020300 0x30>; interrupts = <37>, <38>, <39>, <40>; + clocks = <&coreclk 2>; }; addr-decoding@d0020000 { diff --git a/drivers/clocksource/time-armada-370-xp.c b/drivers/clocksource/time-armada-370-xp.c index 4674f94957cd..a4605fd7e303 100644 --- a/drivers/clocksource/time-armada-370-xp.c +++ b/drivers/clocksource/time-armada-370-xp.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -167,7 +168,6 @@ void __init armada_370_xp_timer_init(void) u32 u; struct device_node *np; unsigned int timer_clk; - int ret; np = of_find_compatible_node(NULL, NULL, "marvell,armada-370-xp-timer"); timer_base = of_iomap(np, 0); WARN_ON(!timer_base); @@ -179,13 +179,14 @@ void __init armada_370_xp_timer_init(void) timer_base + TIMER_CTRL_OFF); timer_clk = 25000000; } else { - u32 clk = 0; - ret = of_property_read_u32(np, "clock-frequency", &clk); - WARN_ON(!clk || ret < 0); + unsigned long rate = 0; + struct clk *clk = of_clk_get(np, 0); + WARN_ON(IS_ERR(clk)); + rate = clk_get_rate(clk); u = readl(timer_base + TIMER_CTRL_OFF); writel(u & ~(TIMER0_25MHZ | TIMER1_25MHZ), timer_base + TIMER_CTRL_OFF); - timer_clk = clk / TIMER_DIVIDER; + timer_clk = rate / TIMER_DIVIDER; } /* We use timer 0 as clocksource, and timer 1 for -- cgit v1.2.3 From f7d12ef53ddfd8175933af42bfc477376d19e19e Mon Sep 17 00:00:00 2001 From: Thomas Petazzoni Date: Thu, 15 
Nov 2012 16:47:58 +0100 Subject: dma: mv_xor: add Device Tree binding This patch finally adds a Device Tree binding to the mv_xor driver. Thanks to the previous cleanup patches, the Device Tree binding is relatively simple: one DT node per XOR engine, with sub-nodes for each XOR channel of the XOR engine. The binding obviously comes with the necessary documentation. Signed-off-by: Thomas Petazzoni Cc: devicetree-discuss@lists.ozlabs.org --- Documentation/devicetree/bindings/dma/mv-xor.txt | 40 ++++++++++++++++ drivers/dma/mv_xor.c | 58 ++++++++++++++++++++++-- 2 files changed, 94 insertions(+), 4 deletions(-) create mode 100644 Documentation/devicetree/bindings/dma/mv-xor.txt (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt new file mode 100644 index 000000000000..7c6cb7fcecd2 --- /dev/null +++ b/Documentation/devicetree/bindings/dma/mv-xor.txt @@ -0,0 +1,40 @@ +* Marvell XOR engines + +Required properties: +- compatible: Should be "marvell,orion-xor" +- reg: Should contain registers location and length (two sets) + the first set is the low registers, the second set the high + registers for the XOR engine. +- clocks: pointer to the reference clock + +The DT node must also contain sub-nodes for each XOR channel that the +XOR engine has. Those sub-nodes have the following required +properties: +- interrupts: interrupt of the XOR channel + +And the following optional properties: +- dmacap,memcpy to indicate that the XOR channel is capable of memcpy operations +- dmacap,memset to indicate that the XOR channel is capable of memset operations +- dmacap,xor to indicate that the XOR channel is capable of xor operations + +Example: + +xor@d0060900 { + compatible = "marvell,orion-xor"; + reg = <0xd0060900 0x100 + 0xd0060b00 0x100>; + clocks = <&coreclk 0>; + status = "okay"; + + xor00 { + interrupts = <51>; + dmacap,memcpy; + dmacap,xor; + }; + xor01 { + interrupts = <52>; + dmacap,memcpy; + dmacap,xor; + dmacap,memset; + }; +}; diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c index d48245c0b0b0..b1c4edd56ebc 100644 --- a/drivers/dma/mv_xor.c +++ b/drivers/dma/mv_xor.c @@ -26,6 +26,9 @@ #include #include #include +#include +#include +#include #include #include "dmaengine.h" @@ -1278,7 +1281,42 @@ static int mv_xor_probe(struct platform_device *pdev) if (!IS_ERR(xordev->clk)) clk_prepare_enable(xordev->clk); - if (pdata && pdata->channels) { + if (pdev->dev.of_node) { + struct device_node *np; + int i = 0; + + for_each_child_of_node(pdev->dev.of_node, np) { + dma_cap_mask_t cap_mask; + int irq; + + dma_cap_zero(cap_mask); + if (of_property_read_bool(np, "dmacap,memcpy")) + dma_cap_set(DMA_MEMCPY, cap_mask); + if (of_property_read_bool(np, "dmacap,xor")) + dma_cap_set(DMA_XOR, cap_mask); + if (of_property_read_bool(np, "dmacap,memset")) + dma_cap_set(DMA_MEMSET, cap_mask); + if (of_property_read_bool(np, "dmacap,interrupt")) + dma_cap_set(DMA_INTERRUPT, cap_mask); + + irq = irq_of_parse_and_map(np, 0); + if (irq < 0) { + ret = irq; + goto err_channel_add; + } + + xordev->channels[i] = + mv_xor_channel_add(xordev, pdev, i, + cap_mask, irq); + if (IS_ERR(xordev->channels[i])) { + ret = PTR_ERR(xordev->channels[i]); + irq_dispose_mapping(irq); + goto err_channel_add; + } + + i++; + } + } else if (pdata && pdata->channels) { for (i = 0; i < MV_XOR_MAX_CHANNELS; i++) { struct mv_xor_channel_data *cd; int irq; @@ -1309,8 +1347,11 @@ static int mv_xor_probe(struct platform_device *pdev) err_channel_add: for
(i = 0; i < MV_XOR_MAX_CHANNELS; i++) - if (xordev->channels[i]) + if (xordev->channels[i]) { + if (pdev->dev.of_node) + irq_dispose_mapping(xordev->channels[i]->irq); mv_xor_channel_remove(xordev->channels[i]); + } clk_disable_unprepare(xordev->clk); clk_put(xordev->clk); @@ -1335,12 +1376,21 @@ static int mv_xor_remove(struct platform_device *pdev) return 0; } +#ifdef CONFIG_OF +static struct of_device_id mv_xor_dt_ids[] __devinitdata = { + { .compatible = "marvell,orion-xor", }, + {}, +}; +MODULE_DEVICE_TABLE(of, mv_xor_dt_ids); +#endif + static struct platform_driver mv_xor_driver = { .probe = mv_xor_probe, .remove = mv_xor_remove, .driver = { - .owner = THIS_MODULE, - .name = MV_XOR_NAME, + .owner = THIS_MODULE, + .name = MV_XOR_NAME, + .of_match_table = of_match_ptr(mv_xor_dt_ids), }, }; -- cgit v1.2.3 From 189dd62642c9819005cf37d3a9e441d203112bd2 Mon Sep 17 00:00:00 2001 From: Thomas Petazzoni Date: Mon, 19 Nov 2012 14:15:25 +0100 Subject: net: mvneta: add clk support Now that the Armada 370/XP platform has gained proper integration with the clock framework, we add clk support in the Marvell Armada 370/XP Ethernet driver. Since the existing Device Tree binding that exposes a 'clock-frequency' property has never been exposed in any stable kernel release, we take the freedom of removing this property to replace it with the standard 'clocks' clock pointer property. The Device Tree binding documentation is updated accordingly. Signed-off-by: Thomas Petazzoni --- .../bindings/net/marvell-armada-370-neta.txt | 4 +-- drivers/net/ethernet/marvell/mvneta.c | 31 ++++++++++++++-------- 2 files changed, 22 insertions(+), 13 deletions(-) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt index c4e87f0e450e..859a6fa7569c 100644 --- a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt +++ b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt @@ -8,7 +8,7 @@ Required properties: property, a single integer). - phy-mode: The interface between the SoC and the PHY (a string that of_get_phy_mode() can understand) -- clock-frequency: frequency of the peripheral clock of the SoC. +- clocks: a pointer to the reference clock for this device. 
Example: @@ -16,7 +16,7 @@ ethernet@d0070000 { compatible = "marvell,armada-370-neta"; reg = <0xd0070000 0x2500>; interrupts = <8>; - clock-frequency = <250000000>; + clocks = <&gate_clk 4>; status = "okay"; phy = <&phy0>; phy-mode = "rgmii-id"; diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index d9dadee6ab79..17b0a4198c88 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -29,6 +29,7 @@ #include #include #include +#include /* Registers */ #define MVNETA_RXQ_CONFIG_REG(q) (0x1400 + ((q) << 2)) @@ -242,7 +243,7 @@ struct mvneta_port { int weight; /* Core clock */ - unsigned int clk_rate_hz; + struct clk *clk; u8 mcast_count[256]; u16 tx_ring_size; u16 rx_ring_size; @@ -1029,7 +1030,11 @@ static void mvneta_rx_pkts_coal_set(struct mvneta_port *pp, static void mvneta_rx_time_coal_set(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, u32 value) { - u32 val = (pp->clk_rate_hz / 1000000) * value; + u32 val; + unsigned long clk_rate; + + clk_rate = clk_get_rate(pp->clk); + val = (clk_rate / 1000000) * value; mvreg_write(pp, MVNETA_RXQ_TIME_COAL_REG(rxq->id), val); rxq->time_coal = value; @@ -2671,7 +2676,7 @@ static int __devinit mvneta_probe(struct platform_device *pdev) const struct mbus_dram_target_info *dram_target_info; struct device_node *dn = pdev->dev.of_node; struct device_node *phy_node; - u32 phy_addr, clk_rate_hz; + u32 phy_addr; struct mvneta_port *pp; struct net_device *dev; const char *mac_addr; @@ -2710,12 +2715,6 @@ static int __devinit mvneta_probe(struct platform_device *pdev) goto err_free_irq; } - if (of_property_read_u32(dn, "clock-frequency", &clk_rate_hz) != 0) { - dev_err(&pdev->dev, "could not read clock-frequency\n"); - err = -EINVAL; - goto err_free_irq; - } - mac_addr = of_get_mac_address(dn); if (!mac_addr || !is_valid_ether_addr(mac_addr)) @@ -2736,7 +2735,6 @@ static int __devinit mvneta_probe(struct platform_device *pdev) clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); pp->weight = MVNETA_RX_POLL_WEIGHT; - pp->clk_rate_hz = clk_rate_hz; pp->phy_node = phy_node; pp->phy_interface = phy_mode; @@ -2746,6 +2744,14 @@ static int __devinit mvneta_probe(struct platform_device *pdev) goto err_free_irq; } + pp->clk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(pp->clk)) { + err = PTR_ERR(pp->clk); + goto err_unmap; + } + + clk_prepare_enable(pp->clk); + pp->tx_done_timer.data = (unsigned long)dev; pp->tx_ring_size = MVNETA_MAX_TXD; @@ -2757,7 +2763,7 @@ static int __devinit mvneta_probe(struct platform_device *pdev) err = mvneta_init(pp, phy_addr); if (err < 0) { dev_err(&pdev->dev, "can't init eth hal\n"); - goto err_unmap; + goto err_clk; } mvneta_port_power_up(pp, phy_mode); @@ -2785,6 +2791,8 @@ static int __devinit mvneta_probe(struct platform_device *pdev) err_deinit: mvneta_deinit(pp); +err_clk: + clk_disable_unprepare(pp->clk); err_unmap: iounmap(pp->base); err_free_irq: @@ -2802,6 +2810,7 @@ static int __devexit mvneta_remove(struct platform_device *pdev) unregister_netdev(dev); mvneta_deinit(pp); + clk_disable_unprepare(pp->clk); iounmap(pp->base); irq_dispose_mapping(dev->irq); free_netdev(dev); -- cgit v1.2.3 From 009f13159bfdccd6e06fe3b62a39fee6dce26c39 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Thu, 2 Aug 2012 11:16:29 +0300 Subject: arm: mvebu: Add support for coherency fabric in mach-mvebu The Armada 370 and Armada XP SOCs have a coherency fabric unit which is responsible for ensuring hardware coherency between all CPUs and between CPUs and I/O 
masters. This patch provides the basic support needed for SMP. Signed-off-by: Yehuda Yitschak Signed-off-by: Gregory CLEMENT Reviewed-by: Will Deacon --- .../devicetree/bindings/arm/coherency-fabric.txt | 16 +++++ arch/arm/boot/dts/armada-370-xp.dtsi | 5 ++ arch/arm/mach-mvebu/Makefile | 2 +- arch/arm/mach-mvebu/coherency.c | 82 ++++++++++++++++++++++ arch/arm/mach-mvebu/coherency.h | 24 +++++++ arch/arm/mach-mvebu/coherency_ll.S | 49 +++++++++++++ arch/arm/mach-mvebu/common.h | 1 + 7 files changed, 178 insertions(+), 1 deletion(-) create mode 100644 Documentation/devicetree/bindings/arm/coherency-fabric.txt create mode 100644 arch/arm/mach-mvebu/coherency.c create mode 100644 arch/arm/mach-mvebu/coherency.h create mode 100644 arch/arm/mach-mvebu/coherency_ll.S (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/coherency-fabric.txt b/Documentation/devicetree/bindings/arm/coherency-fabric.txt new file mode 100644 index 000000000000..2bfbf67dd77e --- /dev/null +++ b/Documentation/devicetree/bindings/arm/coherency-fabric.txt @@ -0,0 +1,16 @@ +Coherency fabric +---------------- +Available on Marvell SOCs: Armada 370 and Armada XP + +Required properties: + +- compatible: "marvell,coherency-fabric" +- reg: Should contain,coherency fabric registers location and length. + +Example: + +coherency-fabric@d0020200 { + compatible = "marvell,coherency-fabric"; + reg = <0xd0020200 0xb0>; +}; + diff --git a/arch/arm/boot/dts/armada-370-xp.dtsi b/arch/arm/boot/dts/armada-370-xp.dtsi index 94b4b9e03571..b0d075b50f29 100644 --- a/arch/arm/boot/dts/armada-370-xp.dtsi +++ b/arch/arm/boot/dts/armada-370-xp.dtsi @@ -36,6 +36,11 @@ interrupt-controller; }; + coherency-fabric@d0020200 { + compatible = "marvell,coherency-fabric"; + reg = <0xd0020200 0xb0>; + }; + soc { #address-cells = <1>; #size-cells = <1>; diff --git a/arch/arm/mach-mvebu/Makefile b/arch/arm/mach-mvebu/Makefile index 57f996b6aa0e..5ce4b42c2697 100644 --- a/arch/arm/mach-mvebu/Makefile +++ b/arch/arm/mach-mvebu/Makefile @@ -2,4 +2,4 @@ ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \ -I$(srctree)/arch/arm/plat-orion/include obj-y += system-controller.o -obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o +obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o coherency.o coherency_ll.o diff --git a/arch/arm/mach-mvebu/coherency.c b/arch/arm/mach-mvebu/coherency.c new file mode 100644 index 000000000000..596ee66a9cc4 --- /dev/null +++ b/arch/arm/mach-mvebu/coherency.c @@ -0,0 +1,82 @@ +/* + * Coherency fabric (Aurora) support for Armada 370 and XP platforms. + * + * Copyright (C) 2012 Marvell + * + * Yehuda Yitschak + * Gregory Clement + * Thomas Petazzoni + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + * + * The Armada 370 and Armada XP SOCs have a coherency fabric which is + * responsible for ensuring hardware coherency between all CPUs and between + * CPUs and I/O masters. This file initializes the coherency fabric and + * supplies basic routines for configuring and controlling hardware coherency + */ + +#include +#include +#include +#include +#include +#include +#include "armada-370-xp.h" + +/* + * Some functions in this file are called very early during SMP + * initialization. 
At that time the device tree framework is not yet + * ready, and it is not possible to get the register address to + * ioremap it. That's why the pointer below is given with an initial + * value matching its virtual mapping. + */ +static void __iomem *coherency_base = ARMADA_370_XP_REGS_VIRT_BASE + 0x20200; + +/* Coherency fabric registers */ +#define COHERENCY_FABRIC_CFG_OFFSET 0x4 + +static struct of_device_id of_coherency_table[] = { + {.compatible = "marvell,coherency-fabric"}, + { /* end of list */ }, +}; + +#ifdef CONFIG_SMP +int coherency_get_cpu_count(void) +{ + int reg, cnt; + + reg = readl(coherency_base + COHERENCY_FABRIC_CFG_OFFSET); + cnt = (reg & 0xF) + 1; + + return cnt; +} +#endif + +/* Function defined in coherency_ll.S */ +int ll_set_cpu_coherent(void __iomem *base_addr, unsigned int hw_cpu_id); + +int set_cpu_coherent(unsigned int hw_cpu_id, int smp_group_id) +{ + if (!coherency_base) { + pr_warn("Can't make CPU %d cache coherent.\n", hw_cpu_id); + pr_warn("Coherency fabric is not initialized\n"); + return 1; + } + + return ll_set_cpu_coherent(coherency_base, hw_cpu_id); +} + +int __init coherency_init(void) +{ + struct device_node *np; + + np = of_find_matching_node(NULL, of_coherency_table); + if (np) { + pr_info("Initializing Coherency fabric\n"); + coherency_base = of_iomap(np, 0); + } + + return 0; +} diff --git a/arch/arm/mach-mvebu/coherency.h b/arch/arm/mach-mvebu/coherency.h new file mode 100644 index 000000000000..2f428137f6fe --- /dev/null +++ b/arch/arm/mach-mvebu/coherency.h @@ -0,0 +1,24 @@ +/* + * arch/arm/mach-mvebu/coherency.h + * + * + * Coherency fabric (Aurora) support for Armada 370 and XP platforms. + * + * Copyright (C) 2012 Marvell + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#ifndef __MACH_370_XP_COHERENCY_H +#define __MACH_370_XP_COHERENCY_H + +#ifdef CONFIG_SMP +int coherency_get_cpu_count(void); +#endif + +int set_cpu_coherent(int cpu_id, int smp_group_id); +int coherency_init(void); + +#endif /* __MACH_370_XP_COHERENCY_H */ diff --git a/arch/arm/mach-mvebu/coherency_ll.S b/arch/arm/mach-mvebu/coherency_ll.S new file mode 100644 index 000000000000..53e8391192cd --- /dev/null +++ b/arch/arm/mach-mvebu/coherency_ll.S @@ -0,0 +1,49 @@ +/* + * Coherency fabric: low level functions + * + * Copyright (C) 2012 Marvell + * + * Gregory CLEMENT + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + * + * This file implements the assembly function to add a CPU to the + * coherency fabric. This function is called by each of the secondary + * CPUs during their early boot in an SMP kernel, which is why this + * function has to be callable from assembly. It can also be called by a + * primary CPU from C code during its boot.
+ */ + +#include +#define ARMADA_XP_CFB_CTL_REG_OFFSET 0x0 +#define ARMADA_XP_CFB_CFG_REG_OFFSET 0x4 + + .text +/* + * r0: Coherency fabric base register address + * r1: HW CPU id + */ +ENTRY(ll_set_cpu_coherent) + /* Create bit by cpu index */ + mov r3, #(1 << 24) + lsl r1, r3, r1 + + /* Add CPU to SMP group - Atomic */ + add r3, r0, #ARMADA_XP_CFB_CTL_REG_OFFSET + ldr r2, [r3] + orr r2, r2, r1 + str r2, [r3] + + /* Enable coherency on CPU - Atomic */ + add r3, r0, #ARMADA_XP_CFB_CFG_REG_OFFSET + ldr r2, [r3] + orr r2, r2, r1 + str r2, [r3] + + dsb + + mov r0, #0 + mov pc, lr +ENDPROC(ll_set_cpu_coherent) diff --git a/arch/arm/mach-mvebu/common.h b/arch/arm/mach-mvebu/common.h index 02f89eaa25fe..ba6b62a42f52 100644 --- a/arch/arm/mach-mvebu/common.h +++ b/arch/arm/mach-mvebu/common.h @@ -20,4 +20,5 @@ void mvebu_restart(char mode, const char *cmd); void armada_370_xp_init_irq(void); void armada_370_xp_handle_irq(struct pt_regs *regs); +int armada_370_xp_coherency_init(void); #endif -- cgit v1.2.3 From 7444dad2409afd94c08875e961ca61c5999cd606 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Thu, 2 Aug 2012 11:17:51 +0300 Subject: arm: mvebu: Add initial support for power managmement service unit The Armada 370 and Armada XP SOCs have a power management service unit which is responsible for powering down and waking up CPUs and other SOC units. This patch adds support for this unit. Signed-off-by: Yehuda Yitschak Signed-off-by: Gregory CLEMENT --- .../devicetree/bindings/arm/armada-370-xp-pmsu.txt | 20 ++++++ arch/arm/boot/dts/armada-xp.dtsi | 6 ++ arch/arm/mach-mvebu/Makefile | 2 +- arch/arm/mach-mvebu/common.h | 1 + arch/arm/mach-mvebu/pmsu.c | 75 ++++++++++++++++++++++ arch/arm/mach-mvebu/pmsu.h | 16 +++++ 6 files changed, 119 insertions(+), 1 deletion(-) create mode 100644 Documentation/devicetree/bindings/arm/armada-370-xp-pmsu.txt create mode 100644 arch/arm/mach-mvebu/pmsu.c create mode 100644 arch/arm/mach-mvebu/pmsu.h (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/armada-370-xp-pmsu.txt b/Documentation/devicetree/bindings/arm/armada-370-xp-pmsu.txt new file mode 100644 index 000000000000..926b4d6aae7e --- /dev/null +++ b/Documentation/devicetree/bindings/arm/armada-370-xp-pmsu.txt @@ -0,0 +1,20 @@ +Power Management Service Unit(PMSU) +----------------------------------- +Available on Marvell SOCs: Armada 370 and Armada XP + +Required properties: + +- compatible: "marvell,armada-370-xp-pmsu" + +- reg: Should contain PMSU registers location and length. First pair + for the per-CPU SW Reset Control registers, second pair for the + Power Management Service Unit. 
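Before the binding example below, a hedged sketch of the intended caller: platform SMP bring-up code. The hook and entry-point names (example_boot_secondary, armada_xp_secondary_startup) are invented for illustration; armada_xp_boot_cpu() is the helper added by the patch further down.

#include <linux/errno.h>
#include <linux/sched.h>

extern void armada_xp_secondary_startup(void);	/* hypothetical entry code */
extern int armada_xp_boot_cpu(unsigned int cpu_id, void *boot_addr);

/* Sketch of an ARM smp_operations .smp_boot_secondary hook */
static int example_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
	/* Redirects the core's boot address to our entry code, then
	 * releases the core from reset through the PMSU */
	if (armada_xp_boot_cpu(cpu, armada_xp_secondary_startup))
		return -ENOSYS;
	return 0;
}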
+ +Example: + +armada-370-xp-pmsu@d0022000 { + compatible = "marvell,armada-370-xp-pmsu"; + reg = <0xd0022100 0x430>, + <0xd0020800 0x20>; +}; + diff --git a/arch/arm/boot/dts/armada-xp.dtsi b/arch/arm/boot/dts/armada-xp.dtsi index f51554e80009..1f95e227053b 100644 --- a/arch/arm/boot/dts/armada-xp.dtsi +++ b/arch/arm/boot/dts/armada-xp.dtsi @@ -27,6 +27,12 @@ <0xd0021870 0x58>; }; + armada-370-xp-pmsu@d0022000 { + compatible = "marvell,armada-370-xp-pmsu"; + reg = <0xd0022100 0x430>, + <0xd0020800 0x20>; + }; + soc { serial@d0012200 { compatible = "ns16550"; diff --git a/arch/arm/mach-mvebu/Makefile b/arch/arm/mach-mvebu/Makefile index 5ce4b42c2697..2e3ec11c51e6 100644 --- a/arch/arm/mach-mvebu/Makefile +++ b/arch/arm/mach-mvebu/Makefile @@ -2,4 +2,4 @@ ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \ -I$(srctree)/arch/arm/plat-orion/include obj-y += system-controller.o -obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o coherency.o coherency_ll.o +obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o coherency.o coherency_ll.o pmsu.o diff --git a/arch/arm/mach-mvebu/common.h b/arch/arm/mach-mvebu/common.h index ba6b62a42f52..9285d0496651 100644 --- a/arch/arm/mach-mvebu/common.h +++ b/arch/arm/mach-mvebu/common.h @@ -21,4 +21,5 @@ void armada_370_xp_init_irq(void); void armada_370_xp_handle_irq(struct pt_regs *regs); int armada_370_xp_coherency_init(void); +int armada_370_xp_pmsu_init(void); #endif diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c new file mode 100644 index 000000000000..3cc4bef6401c --- /dev/null +++ b/arch/arm/mach-mvebu/pmsu.c @@ -0,0 +1,75 @@ +/* + * Power Management Service Unit(PMSU) support for Armada 370/XP platforms. + * + * Copyright (C) 2012 Marvell + * + * Yehuda Yitschak + * Gregory Clement + * Thomas Petazzoni + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + * + * The Armada 370 and Armada XP SOCs have a power management service + * unit which is responsible for powering down and waking up CPUs and + * other SOC units + */ + +#include +#include +#include +#include +#include +#include + +static void __iomem *pmsu_mp_base; +static void __iomem *pmsu_reset_base; + +#define PMSU_BOOT_ADDR_REDIRECT_OFFSET(cpu) ((cpu * 0x100) + 0x24) +#define PMSU_RESET_CTL_OFFSET(cpu) (cpu * 0x8) + +static struct of_device_id of_pmsu_table[] = { + {.compatible = "marvell,armada-370-xp-pmsu"}, + { /* end of list */ }, +}; + +#ifdef CONFIG_SMP +int armada_xp_boot_cpu(unsigned int cpu_id, void *boot_addr) +{ + int reg, hw_cpu; + + if (!pmsu_mp_base || !pmsu_reset_base) { + pr_warn("Can't boot CPU. 
PMSU is uninitialized\n"); + return 1; + } + + hw_cpu = cpu_logical_map(cpu_id); + + writel(virt_to_phys(boot_addr), pmsu_mp_base + + PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu)); + + /* Release CPU from reset by clearing reset bit */ + reg = readl(pmsu_reset_base + PMSU_RESET_CTL_OFFSET(hw_cpu)); + reg &= (~0x1); + writel(reg, pmsu_reset_base + PMSU_RESET_CTL_OFFSET(hw_cpu)); + + return 0; +} +#endif + +int __init armada_370_xp_pmsu_init(void) +{ + struct device_node *np; + + np = of_find_matching_node(NULL, of_pmsu_table); + if (np) { + pr_info("Initializing Power Management Service Unit\n"); + pmsu_mp_base = of_iomap(np, 0); + pmsu_reset_base = of_iomap(np, 1); + } + + return 0; +} + +early_initcall(armada_370_xp_pmsu_init); diff --git a/arch/arm/mach-mvebu/pmsu.h b/arch/arm/mach-mvebu/pmsu.h new file mode 100644 index 000000000000..07a737c6b95d --- /dev/null +++ b/arch/arm/mach-mvebu/pmsu.h @@ -0,0 +1,16 @@ +/* + * Power Management Service Unit (PMSU) support for Armada 370/XP platforms. + * + * Copyright (C) 2012 Marvell + * + * This file is licensed under the terms of the GNU General Public + * License version 2. This program is licensed "as is" without any + * warranty of any kind, whether express or implied. + */ + +#ifndef __MACH_MVEBU_PMSU_H +#define __MACH_MVEBU_PMSU_H + +int armada_xp_boot_cpu(unsigned int cpu_id, void *phys_addr); + +#endif /* __MACH_370_XP_PMSU_H */ -- cgit v1.2.3 From 344e873e5657e8dc0631e4d1d42b69f7d625b02c Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Thu, 2 Aug 2012 11:19:12 +0300 Subject: arm: mvebu: Add IPI support via doorbells This patch enhances the IRQ controller driver to add support for Inter-Processor-Interrupts that are needed to enable SMP support. Signed-off-by: Yehuda Yitschak Signed-off-by: Gregory CLEMENT --- .../devicetree/bindings/arm/armada-370-xp-mpic.txt | 12 ++- arch/arm/boot/dts/armada-xp.dtsi | 2 +- arch/arm/mach-mvebu/armada-370-xp.h | 7 ++ arch/arm/mach-mvebu/irq-armada-370-xp.c | 92 ++++++++++++++++++++-- 4 files changed, 103 insertions(+), 10 deletions(-) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/armada-370-xp-mpic.txt b/Documentation/devicetree/bindings/arm/armada-370-xp-mpic.txt index 70c0dc5f00ed..61df564c0d23 100644 --- a/Documentation/devicetree/bindings/arm/armada-370-xp-mpic.txt +++ b/Documentation/devicetree/bindings/arm/armada-370-xp-mpic.txt @@ -6,9 +6,15 @@ Required properties: - interrupt-controller: Identifies the node as an interrupt controller. - #interrupt-cells: The number of cells to define the interrupts. Should be 1. The cell is the IRQ number + - reg: Should contain MPIC registers location and length. First pair for the main interrupt registers, second pair for the per-CPU - interrupt registers + interrupt registers.
For this last pair, to be compliant with SMP + support, the "virtual" registers must be used (for the record, these registers + automatically map to the interrupt controller registers of the + current CPU) + + + Example: @@ -18,6 +24,6 @@ Example: #address-cells = <1>; #size-cells = <1>; interrupt-controller; - reg = <0xd0020000 0x1000>, - <0xd0021000 0x1000>; + reg = <0xd0020a00 0x1d0>, + <0xd0021070 0x58>; }; diff --git a/arch/arm/boot/dts/armada-xp.dtsi b/arch/arm/boot/dts/armada-xp.dtsi index 1f95e227053b..e6db2b7e2925 100644 --- a/arch/arm/boot/dts/armada-xp.dtsi +++ b/arch/arm/boot/dts/armada-xp.dtsi @@ -24,7 +24,7 @@ mpic: interrupt-controller@d0020000 { reg = <0xd0020a00 0x1d0>, - <0xd0021870 0x58>; + <0xd0021070 0x58>; }; armada-370-xp-pmsu@d0022000 { compatible = "marvell,armada-370-xp-pmsu"; reg = <0xd0022100 0x430>, <0xd0020800 0x20>; }; soc { serial@d0012200 { compatible = "ns16550"; diff --git a/arch/arm/mach-mvebu/armada-370-xp.h b/arch/arm/mach-mvebu/armada-370-xp.h index aac9bebc6b03..c6a7d74fddfe 100644 --- a/arch/arm/mach-mvebu/armada-370-xp.h +++ b/arch/arm/mach-mvebu/armada-370-xp.h @@ -19,4 +19,11 @@ #define ARMADA_370_XP_REGS_VIRT_BASE IOMEM(0xfeb00000) #define ARMADA_370_XP_REGS_SIZE SZ_1M +#ifdef CONFIG_SMP +#include + +void armada_mpic_send_doorbell(const struct cpumask *mask, unsigned int irq); +void armada_xp_mpic_smp_cpu_init(void); +#endif + +#endif /* __MACH_ARMADA_370_XP_H */ diff --git a/arch/arm/mach-mvebu/irq-armada-370-xp.c b/arch/arm/mach-mvebu/irq-armada-370-xp.c index 5f5f9394b6b2..549b6846f940 100644 --- a/arch/arm/mach-mvebu/irq-armada-370-xp.c +++ b/arch/arm/mach-mvebu/irq-armada-370-xp.c @@ -24,6 +24,7 @@ #include #include #include +#include /* Interrupt Controller Registers Map */ #define ARMADA_370_XP_INT_SET_MASK_OFFS (0x48) @@ -35,6 +36,12 @@ #define ARMADA_370_XP_CPU_INTACK_OFFS (0x44) +#define ARMADA_370_XP_SW_TRIG_INT_OFFS (0x4) +#define ARMADA_370_XP_IN_DRBEL_MSK_OFFS (0xc) +#define ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS (0x8) + +#define ACTIVE_DOORBELLS (8) + static void __iomem *per_cpu_int_base; static void __iomem *main_int_base; static struct irq_domain *armada_370_xp_mpic_domain; @@ -51,11 +58,22 @@ static void armada_370_xp_irq_unmask(struct irq_data *d) per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS); } +#ifdef CONFIG_SMP +static int armada_xp_set_affinity(struct irq_data *d, + const struct cpumask *mask_val, bool force) +{ + return 0; +} +#endif + static struct irq_chip armada_370_xp_irq_chip = { .name = "armada_370_xp_irq", .irq_mask = armada_370_xp_irq_mask, .irq_mask_ack = armada_370_xp_irq_mask, .irq_unmask = armada_370_xp_irq_unmask, +#ifdef CONFIG_SMP + .irq_set_affinity = armada_xp_set_affinity, +#endif }; static int armada_370_xp_mpic_irq_map(struct irq_domain *h, @@ -72,6 +90,41 @@ static int armada_370_xp_mpic_irq_map(struct irq_domain *h, return 0; } +#ifdef CONFIG_SMP +void armada_mpic_send_doorbell(const struct cpumask *mask, unsigned int irq) +{ + int cpu; + unsigned long map = 0; + + /* Convert our logical CPU mask into a physical one. */ + for_each_cpu(cpu, mask) + map |= 1 << cpu_logical_map(cpu); + + /* + * Ensure that stores to Normal memory are visible to the + * other CPUs before issuing the IPI. 
+ */ + dsb(); + + /* submit softirq */ + writel((map << 8) | irq, main_int_base + + ARMADA_370_XP_SW_TRIG_INT_OFFS); +} + +void armada_xp_mpic_smp_cpu_init(void) +{ + /* Clear pending IPIs */ + writel(0, per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS); + + /* Enable first 8 IPIs */ + writel((1 << ACTIVE_DOORBELLS) - 1, per_cpu_int_base + + ARMADA_370_XP_IN_DRBEL_MSK_OFFS); + + /* Unmask IPI interrupt */ + writel(0, per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS); +} +#endif /* CONFIG_SMP */ + static struct irq_domain_ops armada_370_xp_mpic_irq_ops = { .map = armada_370_xp_mpic_irq_map, .xlate = irq_domain_xlate_onecell, @@ -91,13 +144,18 @@ static int __init armada_370_xp_mpic_of_init(struct device_node *node, control = readl(main_int_base + ARMADA_370_XP_INT_CONTROL); armada_370_xp_mpic_domain = - irq_domain_add_linear(node, (control >> 2) & 0x3ff, - &armada_370_xp_mpic_irq_ops, NULL); + irq_domain_add_linear(node, (control >> 2) & 0x3ff, + &armada_370_xp_mpic_irq_ops, NULL); if (!armada_370_xp_mpic_domain) panic("Unable to add Armada_370_Xp MPIC irq domain (DT)\n"); irq_set_default_host(armada_370_xp_mpic_domain); + +#ifdef CONFIG_SMP + armada_xp_mpic_smp_cpu_init(); +#endif + return 0; } @@ -111,14 +169,36 @@ asmlinkage void __exception_irq_entry armada_370_xp_handle_irq(struct pt_regs ARMADA_370_XP_CPU_INTACK_OFFS); irqnr = irqstat & 0x3FF; - if (irqnr < 1023) { - irqnr = - irq_find_mapping(armada_370_xp_mpic_domain, irqnr); + if (irqnr > 1022) + break; + + if (irqnr >= 8) { + irqnr = irq_find_mapping(armada_370_xp_mpic_domain, + irqnr); handle_IRQ(irqnr, regs); continue; } +#ifdef CONFIG_SMP + /* IPI Handling */ + if (irqnr == 0) { + u32 ipimask, ipinr; + + ipimask = readl_relaxed(per_cpu_int_base + + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS) + & 0xFF; + + writel(0x0, per_cpu_int_base + + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS); + + /* Handle all pending doorbells */ + for (ipinr = 0; ipinr < ACTIVE_DOORBELLS; ipinr++) { + if (ipimask & (0x1 << ipinr)) + handle_IPI(ipinr, regs); + } + continue; + } +#endif - break; } while (1); } -- cgit v1.2.3 From e60304f8cb7bb545e79fe62d9b9762460c254ec2 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Fri, 12 Oct 2012 19:20:36 +0200 Subject: arm: mvebu: Add hardware I/O Coherency support Armada 370 and XP come with an unit called coherency fabric. This unit allows to use the Armada 370/XP as a nearly coherent architecture. The coherency mechanism uses snoop filters to ensure the coherency between caches, DRAM and devices. This mechanism needs a synchronization barrier which guarantees that all the memory writes initiated by the devices have reached their target and do not reside in intermediate write buffers. That's why the architecture is not totally coherent and we need to provide our own functions for some DMA operations. Beside the use of the coherency fabric, the device units will have to set the attribute flag of the decoding address window to select the accurate coherency process for the memory transaction. This is done each device driver programs the DRAM address windows. The value of the attribute set by the driver is retrieved through the orion_addr_map_cfg struct filled during the early initialization of the platform. 
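To make the effect of this change concrete before the patch itself: once the bus notifier below has installed the custom dma_map_ops on platform devices, an ordinary streaming DMA mapping transparently issues the I/O sync barrier on the device-to-CPU path. A hedged sketch follows; the helper and buffer names are invented, while dma_map_single()/dma_unmap_single() are the generic DMA API and require no driver changes.

#include <linux/dma-mapping.h>

/* Illustration only: receive path of some platform device driver */
static void example_rx_complete(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, dma))
		return;

	/* ... the device DMAs into the buffer here ... */

	/* With mvebu_hwcc_dma_ops in place, this unmap performs the
	 * I/O sync barrier so the CPU sees the device's writes */
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
}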
Signed-off-by: Gregory CLEMENT Reviewed-by: Yehuda Yitschak Acked-by: Marek Szyprowski --- .../devicetree/bindings/arm/coherency-fabric.txt | 9 ++- arch/arm/boot/dts/armada-370-xp.dtsi | 3 +- arch/arm/mach-mvebu/addr-map.c | 3 + arch/arm/mach-mvebu/coherency.c | 73 ++++++++++++++++++++++ 4 files changed, 85 insertions(+), 3 deletions(-) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/coherency-fabric.txt b/Documentation/devicetree/bindings/arm/coherency-fabric.txt index 2bfbf67dd77e..17d8cd107559 100644 --- a/Documentation/devicetree/bindings/arm/coherency-fabric.txt +++ b/Documentation/devicetree/bindings/arm/coherency-fabric.txt @@ -5,12 +5,17 @@ Available on Marvell SOCs: Armada 370 and Armada XP Required properties: - compatible: "marvell,coherency-fabric" -- reg: Should contain,coherency fabric registers location and length. + +- reg: Should contain coherency fabric registers location and + length. First pair for the coherency fabric registers, second pair + for the per-CPU fabric registers registers. Example: coherency-fabric@d0020200 { compatible = "marvell,coherency-fabric"; - reg = <0xd0020200 0xb0>; + reg = <0xd0020200 0xb0>, + <0xd0021810 0x1c>; + }; diff --git a/arch/arm/boot/dts/armada-370-xp.dtsi b/arch/arm/boot/dts/armada-370-xp.dtsi index b0d075b50f29..98a6b26a7dc8 100644 --- a/arch/arm/boot/dts/armada-370-xp.dtsi +++ b/arch/arm/boot/dts/armada-370-xp.dtsi @@ -38,7 +38,8 @@ coherency-fabric@d0020200 { compatible = "marvell,coherency-fabric"; - reg = <0xd0020200 0xb0>; + reg = <0xd0020200 0xb0>, + <0xd0021810 0x1c>; }; soc { diff --git a/arch/arm/mach-mvebu/addr-map.c b/arch/arm/mach-mvebu/addr-map.c index fe454a4430be..595f6b722a8f 100644 --- a/arch/arm/mach-mvebu/addr-map.c +++ b/arch/arm/mach-mvebu/addr-map.c @@ -108,6 +108,9 @@ static int __init armada_setup_cpu_mbus(void) addr_map_cfg.bridge_virt_base = mbus_unit_addr_decoding_base; + if (of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric")) + addr_map_cfg.hw_io_coherency = 1; + /* * Disable, clear and configure windows. 
*/ diff --git a/arch/arm/mach-mvebu/coherency.c b/arch/arm/mach-mvebu/coherency.c index 596ee66a9cc4..8278960066c3 100644 --- a/arch/arm/mach-mvebu/coherency.c +++ b/arch/arm/mach-mvebu/coherency.c @@ -22,6 +22,8 @@ #include #include #include +#include +#include #include #include "armada-370-xp.h" @@ -33,10 +35,13 @@ * value matching its virtual mapping */ static void __iomem *coherency_base = ARMADA_370_XP_REGS_VIRT_BASE + 0x20200; +static void __iomem *coherency_cpu_base; /* Coherency fabric registers */ #define COHERENCY_FABRIC_CFG_OFFSET 0x4 +#define IO_SYNC_BARRIER_CTL_OFFSET 0x0 + static struct of_device_id of_coherency_table[] = { {.compatible = "marvell,coherency-fabric"}, { /* end of list */ }, @@ -68,6 +73,70 @@ int set_cpu_coherent(unsigned int hw_cpu_id, int smp_group_id) return ll_set_cpu_coherent(coherency_base, hw_cpu_id); } +static inline void mvebu_hwcc_sync_io_barrier(void) +{ + writel(0x1, coherency_cpu_base + IO_SYNC_BARRIER_CTL_OFFSET); + while (readl(coherency_cpu_base + IO_SYNC_BARRIER_CTL_OFFSET) & 0x1); +} + +static dma_addr_t mvebu_hwcc_dma_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, + enum dma_data_direction dir, + struct dma_attrs *attrs) +{ + if (dir != DMA_TO_DEVICE) + mvebu_hwcc_sync_io_barrier(); + return pfn_to_dma(dev, page_to_pfn(page)) + offset; +} + + +static void mvebu_hwcc_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, + size_t size, enum dma_data_direction dir, + struct dma_attrs *attrs) +{ + if (dir != DMA_TO_DEVICE) + mvebu_hwcc_sync_io_barrier(); +} + +static void mvebu_hwcc_dma_sync(struct device *dev, dma_addr_t dma_handle, + size_t size, enum dma_data_direction dir) +{ + if (dir != DMA_TO_DEVICE) + mvebu_hwcc_sync_io_barrier(); +} + +static struct dma_map_ops mvebu_hwcc_dma_ops = { + .alloc = arm_dma_alloc, + .free = arm_dma_free, + .mmap = arm_dma_mmap, + .map_page = mvebu_hwcc_dma_map_page, + .unmap_page = mvebu_hwcc_dma_unmap_page, + .get_sgtable = arm_dma_get_sgtable, + .map_sg = arm_dma_map_sg, + .unmap_sg = arm_dma_unmap_sg, + .sync_single_for_cpu = mvebu_hwcc_dma_sync, + .sync_single_for_device = mvebu_hwcc_dma_sync, + .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu, + .sync_sg_for_device = arm_dma_sync_sg_for_device, + .set_dma_mask = arm_dma_set_mask, +}; + +static int mvebu_hwcc_platform_notifier(struct notifier_block *nb, + unsigned long event, void *__dev) +{ + struct device *dev = __dev; + + if (event != BUS_NOTIFY_ADD_DEVICE) + return NOTIFY_DONE; + set_dma_ops(dev, &mvebu_hwcc_dma_ops); + + return NOTIFY_OK; +} + +static struct notifier_block mvebu_hwcc_platform_nb = { + .notifier_call = mvebu_hwcc_platform_notifier, +}; + int __init coherency_init(void) { struct device_node *np; @@ -76,6 +145,10 @@ int __init coherency_init(void) if (np) { pr_info("Initializing Coherency fabric\n"); coherency_base = of_iomap(np, 0); + coherency_cpu_base = of_iomap(np, 1); + set_cpu_coherent(cpu_logical_map(smp_processor_id()), 0); + bus_register_notifier(&platform_bus_type, + &mvebu_hwcc_platform_nb); } return 0; -- cgit v1.2.3 From 3ee11aef75db51c69cb8cb91dd01afb28036f1b5 Mon Sep 17 00:00:00 2001 From: Gregory CLEMENT Date: Wed, 26 Sep 2012 18:02:50 +0200 Subject: arm: l2x0: add aurora related properties to OF binding Aurora is a L2 Cache Controller designed to be compatible with the L2x0 Cache Controller. L2X0 OF bindings are extended to support some specificity of Aurora (no cache id part number available through hardware, always write through mode, choice between outer cache and system cache). 
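As a rough illustration of the platform side (not part of this patch): a mach-mvebu setup could probe the extended binding through the existing l2x0 OF entry point. l2x0_of_init() is the generic hook in arch/arm/mm/cache-l2x0.c; the zero aux value and all-ones mask below are placeholders, not values taken from this series.

#include <asm/hardware/cache-l2x0.h>

static void __init example_mvebu_l2_init(void)
{
	/* Matches "marvell,aurora-system-cache" or
	 * "marvell,aurora-outer-cache" in the DT and applies the
	 * optional cache-id-part/wt-override properties */
	l2x0_of_init(0, ~0UL);
}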
Signed-off-by: Gregory CLEMENT Signed-off-by: Yehuda Yitschak Tested-and-reviewed-by: Lior Amsalem Acked-by: Arnd Bergmann Cc: Grant Likely Cc: Rob Herring Cc: Russell King Cc: Barry Song <21cnbao@gmail.com> Cc: Will Deacon Cc: Arnd Bergmann Cc: Olof Johansson Signed-off-by: Jason Cooper --- Documentation/devicetree/bindings/arm/l2cc.txt | 9 +++++++++ 1 file changed, 9 insertions(+) (limited to 'Documentation') diff --git a/Documentation/devicetree/bindings/arm/l2cc.txt b/Documentation/devicetree/bindings/arm/l2cc.txt index 7ca52161e7ab..76b0ee6ee9a4 100644 --- a/Documentation/devicetree/bindings/arm/l2cc.txt +++ b/Documentation/devicetree/bindings/arm/l2cc.txt @@ -10,6 +10,12 @@ Required properties: "arm,pl310-cache" "arm,l220-cache" "arm,l210-cache" + "marvell,aurora-system-cache": Marvell Controller designed to be + compatible with the ARM one, with system cache mode (meaning + maintenance operations on L1 are broadcast to the L2 and L2 + performs the same operation). + "marvell,aurora-outer-cache": Marvell Controller designed to be + compatible with the ARM one with outer cache mode. - cache-unified : Specifies the cache is a unified cache. - cache-level : Should be set to 2 for a level 2 cache. - reg : Physical base address and size of cache controller's memory mapped @@ -29,6 +35,9 @@ Optional properties: filter. Addresses in the filter window are directed to the M1 port. Other addresses will go to the M0 port. - interrupts : 1 combined interrupt. +- cache-id-part: cache id part number to be used if it is not present + on hardware +- wt-override: If present then L2 is forced to Write through mode Example: -- cgit v1.2.3